Test Report: QEMU_macOS 15452

a814542ad3f862deaa139b0e8d9c91b365126bac:2023-07-06:30013

Failed tests (89/244)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 17.65
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 10.57
22 TestAddons/Setup 43.91
23 TestCertOptions 10.01
24 TestCertExpiration 197.4
25 TestDockerFlags 10.07
26 TestForceSystemdFlag 10.05
27 TestForceSystemdEnv 11.22
72 TestFunctional/parallel/ServiceCmdConnect 44.25
139 TestImageBuild/serial/BuildWithBuildArg 1.1
148 TestIngressAddonLegacy/serial/ValidateIngressAddons 49.28
151 TestJSONOutput/start/Command 17.63
157 TestJSONOutput/pause/Command 1.81
163 TestJSONOutput/unpause/Command 1.46
183 TestMountStart/serial/StartWithMountFirst 10.48
186 TestMultiNode/serial/FreshStart2Nodes 9.84
187 TestMultiNode/serial/DeployApp2Nodes 119.49
188 TestMultiNode/serial/PingHostFrom2Pods 0.08
189 TestMultiNode/serial/AddNode 0.07
190 TestMultiNode/serial/ProfileList 0.17
191 TestMultiNode/serial/CopyFile 0.06
192 TestMultiNode/serial/StopNode 0.13
193 TestMultiNode/serial/StartAfterStop 0.1
194 TestMultiNode/serial/RestartKeepsNodes 5.36
195 TestMultiNode/serial/DeleteNode 0.09
196 TestMultiNode/serial/StopMultiNode 0.14
197 TestMultiNode/serial/RestartMultiNode 5.25
198 TestMultiNode/serial/ValidateNameConflict 20.01
202 TestPreload 10.19
204 TestScheduledStopUnix 10.15
205 TestSkaffold 11.75
208 TestRunningBinaryUpgrade 167.44
210 TestKubernetesUpgrade 15.22
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.38
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.58
225 TestStoppedBinaryUpgrade/Setup 175.83
227 TestPause/serial/Start 9.98
237 TestNoKubernetes/serial/StartWithK8s 9.71
238 TestNoKubernetes/serial/StartWithStopK8s 5.3
239 TestNoKubernetes/serial/Start 5.31
243 TestNoKubernetes/serial/StartNoArgs 5.31
245 TestNetworkPlugins/group/auto/Start 9.7
246 TestNetworkPlugins/group/kindnet/Start 9.8
247 TestNetworkPlugins/group/calico/Start 9.7
248 TestNetworkPlugins/group/custom-flannel/Start 9.71
249 TestNetworkPlugins/group/false/Start 9.75
250 TestNetworkPlugins/group/enable-default-cni/Start 9.66
251 TestNetworkPlugins/group/flannel/Start 9.89
252 TestNetworkPlugins/group/bridge/Start 9.84
253 TestNetworkPlugins/group/kubenet/Start 9.67
255 TestStartStop/group/old-k8s-version/serial/FirstStart 9.78
256 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
257 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
260 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
261 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
262 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
263 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
264 TestStartStop/group/old-k8s-version/serial/Pause 0.1
266 TestStartStop/group/no-preload/serial/FirstStart 9.78
267 TestStartStop/group/no-preload/serial/DeployApp 0.09
268 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
271 TestStartStop/group/no-preload/serial/SecondStart 5.25
272 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
273 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.05
274 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
275 TestStartStop/group/no-preload/serial/Pause 0.1
277 TestStartStop/group/embed-certs/serial/FirstStart 9.83
278 TestStoppedBinaryUpgrade/Upgrade 2.32
279 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
281 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.81
282 TestStartStop/group/embed-certs/serial/DeployApp 0.09
283 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/embed-certs/serial/SecondStart 5.25
287 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
288 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
291 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.2
292 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
293 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
294 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
295 TestStartStop/group/embed-certs/serial/Pause 0.1
297 TestStartStop/group/newest-cni/serial/FirstStart 9.76
298 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.05
300 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
306 TestStartStop/group/newest-cni/serial/SecondStart 5.25
309 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (17.65s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-524000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-524000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (17.648137542s)

-- stdout --
	{"specversion":"1.0","id":"db1b11c5-3e91-455e-b78a-fa5c48e9bbb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-524000] minikube v1.30.1 on Darwin 13.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"72e4db6a-f83f-42ef-949d-4a7c98edda6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15452"}}
	{"specversion":"1.0","id":"458c9f14-77bc-4e2c-9262-39b727e282a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig"}}
	{"specversion":"1.0","id":"7567c7a9-b353-441c-81b7-75f2ce155d20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"516502e0-3aab-4791-9c35-e9a6b40845c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"89e12138-4ab9-4b8e-84f7-4ba8a44a91db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube"}}
	{"specversion":"1.0","id":"6755f9c6-458d-4d69-8c20-6213345c1de0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"82224ddf-dea0-48cd-a495-0d9c9e044e57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"31e6e525-0034-4f24-84ed-200635df5765","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"3d5d3722-f9aa-4ae9-b82d-0c90fe85a958","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"76849044-2051-406e-a9df-eec85069e387","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-524000 in cluster download-only-524000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"33f9b16e-8eda-4951-96c6-fa11a35ad5f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"842a953c-3f6c-432a-abba-288cf71d87b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1088b05b0 0x1088b05b0 0x1088b05b0 0x1088b05b0 0x1088b05b0 0x1088b05b0 0x1088b05b0] Decompressors:map[bz2:0x14000056f30 gz:0x14000056f38 tar:0x14000056ec0 tar.bz2:0x14000056ed0 tar.gz:0x14000056ee0 tar.xz:0x14000056ef0 tar.zst:0x14000056f20 tbz2:0x14000056ed0 tgz:0x140000
56ee0 txz:0x14000056ef0 tzst:0x14000056f20 xz:0x14000056f60 zip:0x14000056f70 zst:0x14000056f68] Getters:map[file:0x14000500db0 http:0x14000ac6140 https:0x14000ac61e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"c8efa577-cb73-451a-b1f6-a91534873f91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0706 10:56:11.799791    2467 out.go:296] Setting OutFile to fd 1 ...
	I0706 10:56:11.799923    2467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 10:56:11.799926    2467 out.go:309] Setting ErrFile to fd 2...
	I0706 10:56:11.799929    2467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 10:56:11.800028    2467 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	W0706 10:56:11.800104    2467 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15452-1247/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15452-1247/.minikube/config/config.json: no such file or directory
	I0706 10:56:11.801259    2467 out.go:303] Setting JSON to true
	I0706 10:56:11.817606    2467 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1543,"bootTime":1688664628,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 10:56:11.817682    2467 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 10:56:11.823247    2467 out.go:97] [download-only-524000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 10:56:11.824518    2467 out.go:169] MINIKUBE_LOCATION=15452
	I0706 10:56:11.823379    2467 notify.go:220] Checking for updates...
	W0706 10:56:11.823407    2467 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball: no such file or directory
	I0706 10:56:11.831288    2467 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 10:56:11.834189    2467 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 10:56:11.837241    2467 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 10:56:11.840290    2467 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	W0706 10:56:11.845173    2467 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0706 10:56:11.845374    2467 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 10:56:11.849268    2467 out.go:97] Using the qemu2 driver based on user configuration
	I0706 10:56:11.849276    2467 start.go:297] selected driver: qemu2
	I0706 10:56:11.849278    2467 start.go:944] validating driver "qemu2" against <nil>
	I0706 10:56:11.849364    2467 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 10:56:11.853151    2467 out.go:169] Automatically selected the socket_vmnet network
	I0706 10:56:11.858283    2467 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0706 10:56:11.858377    2467 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0706 10:56:11.858432    2467 cni.go:84] Creating CNI manager for ""
	I0706 10:56:11.858451    2467 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0706 10:56:11.858458    2467 start_flags.go:319] config:
	{Name:download-only-524000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-524000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 10:56:11.863867    2467 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 10:56:11.867210    2467 out.go:97] Downloading VM boot image ...
	I0706 10:56:11.867252    2467 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso
	I0706 10:56:18.692505    2467 out.go:97] Starting control plane node download-only-524000 in cluster download-only-524000
	I0706 10:56:18.692532    2467 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0706 10:56:18.744782    2467 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0706 10:56:18.744838    2467 cache.go:57] Caching tarball of preloaded images
	I0706 10:56:18.745019    2467 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0706 10:56:18.749860    2467 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0706 10:56:18.749867    2467 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0706 10:56:18.825772    2467 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0706 10:56:28.398125    2467 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0706 10:56:28.398264    2467 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0706 10:56:29.039846    2467 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0706 10:56:29.040020    2467 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/download-only-524000/config.json ...
	I0706 10:56:29.040047    2467 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/download-only-524000/config.json: {Name:mk30c2d20c5bd9770d6187b8d336f7f3af194d4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:56:29.040275    2467 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0706 10:56:29.040437    2467 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0706 10:56:29.380326    2467 out.go:169] 
	W0706 10:56:29.384534    2467 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1088b05b0 0x1088b05b0 0x1088b05b0 0x1088b05b0 0x1088b05b0 0x1088b05b0 0x1088b05b0] Decompressors:map[bz2:0x14000056f30 gz:0x14000056f38 tar:0x14000056ec0 tar.bz2:0x14000056ed0 tar.gz:0x14000056ee0 tar.xz:0x14000056ef0 tar.zst:0x14000056f20 tbz2:0x14000056ed0 tgz:0x14000056ee0 txz:0x14000056ef0 tzst:0x14000056f20 xz:0x14000056f60 zip:0x14000056f70 zst:0x14000056f68] Getters:map[file:0x14000500db0 http:0x14000ac6140 https:0x14000ac61e0] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0706 10:56:29.384559    2467 out_reason.go:110] 
	W0706 10:56:29.390495    2467 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 10:56:29.394478    2467 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-524000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (17.65s)
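The root cause of this failure is easiest to pull out of the `-o=json` stream above: minikube emits one CloudEvents-style JSON object per line, and the terminal `io.k8s.sigs.minikube.error` event carries the exit code (40) and the download-failure message. A minimal filtering sketch (event shape and field names taken from the log above; the abridged sample line is illustrative, not a verbatim event):

```python
import json


def find_errors(lines):
    """Yield (exitcode, message) pairs from minikube's -o=json event stream."""
    for line in lines:
        line = line.strip()
        if not line.startswith("{"):
            continue  # skip non-JSON noise interleaved in the output
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            data = event.get("data", {})
            yield data.get("exitcode", ""), data.get("message", "")


# One error event, abridged from the log above:
sample = ['{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
          '"data":{"exitcode":"40","message":"Failed to cache kubectl: download failed"}}']
for code, msg in find_errors(sample):
    print(code, msg)
```

Run against the full stdout block above, this would surface the exit-40 `INET_CACHE_KUBECTL` event: the v1.16.0 darwin/arm64 kubectl checksum URL returns HTTP 404, which also explains the follow-on `TestDownloadOnly/v1.16.0/kubectl` failure below (the binary was never cached).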

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestOffline (10.57s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-578000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-578000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.4023535s)

-- stdout --
	* [offline-docker-578000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-578000 in cluster offline-docker-578000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-578000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:11:29.939396    4094 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:11:29.939528    4094 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:11:29.939531    4094 out.go:309] Setting ErrFile to fd 2...
	I0706 11:11:29.939533    4094 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:11:29.939606    4094 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:11:29.940687    4094 out.go:303] Setting JSON to false
	I0706 11:11:29.957280    4094 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2461,"bootTime":1688664628,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:11:29.957357    4094 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:11:29.961424    4094 out.go:177] * [offline-docker-578000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:11:29.973597    4094 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:11:29.977517    4094 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:11:29.973622    4094 notify.go:220] Checking for updates...
	I0706 11:11:29.983501    4094 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:11:29.986558    4094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:11:29.989527    4094 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:11:29.992595    4094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:11:29.995863    4094 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:11:29.995939    4094 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:11:29.999500    4094 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:11:30.006509    4094 start.go:297] selected driver: qemu2
	I0706 11:11:30.006517    4094 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:11:30.006527    4094 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:11:30.008391    4094 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:11:30.011512    4094 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:11:30.014613    4094 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:11:30.014631    4094 cni.go:84] Creating CNI manager for ""
	I0706 11:11:30.014638    4094 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:11:30.014640    4094 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:11:30.014646    4094 start_flags.go:319] config:
	{Name:offline-docker-578000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-578000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0}
	I0706 11:11:30.018759    4094 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:11:30.021507    4094 out.go:177] * Starting control plane node offline-docker-578000 in cluster offline-docker-578000
	I0706 11:11:30.029374    4094 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:11:30.029409    4094 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:11:30.029420    4094 cache.go:57] Caching tarball of preloaded images
	I0706 11:11:30.029502    4094 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:11:30.029509    4094 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:11:30.029574    4094 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/offline-docker-578000/config.json ...
	I0706 11:11:30.029585    4094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/offline-docker-578000/config.json: {Name:mk255e9332a03b6e9e9d393be862ab0e9c3590d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:11:30.029767    4094 start.go:365] acquiring machines lock for offline-docker-578000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:11:30.029794    4094 start.go:369] acquired machines lock for "offline-docker-578000" in 21.75µs
	I0706 11:11:30.029806    4094 start.go:93] Provisioning new machine with config: &{Name:offline-docker-578000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-578000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:11:30.029833    4094 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:11:30.033559    4094 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0706 11:11:30.047622    4094 start.go:159] libmachine.API.Create for "offline-docker-578000" (driver="qemu2")
	I0706 11:11:30.047650    4094 client.go:168] LocalClient.Create starting
	I0706 11:11:30.047714    4094 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:11:30.047735    4094 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:30.047747    4094 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:30.047793    4094 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:11:30.047811    4094 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:30.047818    4094 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:30.048142    4094 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:11:30.605554    4094 main.go:141] libmachine: Creating SSH key...
	I0706 11:11:30.676760    4094 main.go:141] libmachine: Creating Disk image...
	I0706 11:11:30.676771    4094 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:11:30.676941    4094 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/disk.qcow2
	I0706 11:11:30.686102    4094 main.go:141] libmachine: STDOUT: 
	I0706 11:11:30.686120    4094 main.go:141] libmachine: STDERR: 
	I0706 11:11:30.686186    4094 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/disk.qcow2 +20000M
	I0706 11:11:30.693901    4094 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:11:30.693917    4094 main.go:141] libmachine: STDERR: 
	I0706 11:11:30.693941    4094 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/disk.qcow2
	I0706 11:11:30.693950    4094 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:11:30.693994    4094 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:a0:8c:f8:da:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/disk.qcow2
	I0706 11:11:30.695789    4094 main.go:141] libmachine: STDOUT: 
	I0706 11:11:30.695803    4094 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:11:30.695821    4094 client.go:171] LocalClient.Create took 648.168917ms
	I0706 11:11:32.697868    4094 start.go:128] duration metric: createHost completed in 2.668036459s
	I0706 11:11:32.697899    4094 start.go:83] releasing machines lock for "offline-docker-578000", held for 2.668109584s
	W0706 11:11:32.697936    4094 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:32.714203    4094 out.go:177] * Deleting "offline-docker-578000" in qemu2 ...
	W0706 11:11:32.726074    4094 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:32.726087    4094 start.go:687] Will try again in 5 seconds ...
	I0706 11:11:37.728227    4094 start.go:365] acquiring machines lock for offline-docker-578000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:11:37.728371    4094 start.go:369] acquired machines lock for "offline-docker-578000" in 89.709µs
	I0706 11:11:37.728406    4094 start.go:93] Provisioning new machine with config: &{Name:offline-docker-578000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-578000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:11:37.728465    4094 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:11:37.789151    4094 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0706 11:11:37.810545    4094 start.go:159] libmachine.API.Create for "offline-docker-578000" (driver="qemu2")
	I0706 11:11:37.810574    4094 client.go:168] LocalClient.Create starting
	I0706 11:11:37.810683    4094 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:11:37.810721    4094 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:37.810735    4094 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:37.810788    4094 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:11:37.810807    4094 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:37.810818    4094 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:37.811103    4094 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:11:38.098772    4094 main.go:141] libmachine: Creating SSH key...
	I0706 11:11:38.246485    4094 main.go:141] libmachine: Creating Disk image...
	I0706 11:11:38.246495    4094 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:11:38.246736    4094 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/disk.qcow2
	I0706 11:11:38.260115    4094 main.go:141] libmachine: STDOUT: 
	I0706 11:11:38.260136    4094 main.go:141] libmachine: STDERR: 
	I0706 11:11:38.260219    4094 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/disk.qcow2 +20000M
	I0706 11:11:38.270487    4094 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:11:38.270507    4094 main.go:141] libmachine: STDERR: 
	I0706 11:11:38.270521    4094 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/disk.qcow2
	I0706 11:11:38.270530    4094 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:11:38.270573    4094 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:18:c5:51:77:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/offline-docker-578000/disk.qcow2
	I0706 11:11:38.272892    4094 main.go:141] libmachine: STDOUT: 
	I0706 11:11:38.272914    4094 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:11:38.272928    4094 client.go:171] LocalClient.Create took 462.350917ms
	I0706 11:11:40.275096    4094 start.go:128] duration metric: createHost completed in 2.546610875s
	I0706 11:11:40.275207    4094 start.go:83] releasing machines lock for "offline-docker-578000", held for 2.546829209s
	W0706 11:11:40.275618    4094 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-578000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-578000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:40.283135    4094 out.go:177] 
	W0706 11:11:40.287193    4094 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:11:40.287219    4094 out.go:239] * 
	* 
	W0706 11:11:40.289631    4094 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:11:40.301208    4094 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-578000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-07-06 11:11:40.315917 -0700 PDT m=+928.466661959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-578000 -n offline-docker-578000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-578000 -n offline-docker-578000: exit status 7 (66.5585ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-578000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-578000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-578000
--- FAIL: TestOffline (10.57s)
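Both TestOffline provisioning attempts above die at the same point: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. nothing was accepting connections on the socket_vmnet Unix socket when `socket_vmnet_client` tried to launch QEMU. The distinct failure modes can be reproduced in isolation; a minimal sketch (Python, probing a throwaway path rather than the real `/var/run/socket_vmnet`):

```python
import socket

def unix_socket_status(path: str) -> str:
    """Classify connectivity to a Unix domain socket, mirroring the
    outcomes socket_vmnet_client can hit when contacting its daemon."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "listening"           # a daemon accepted the connection
    except ConnectionRefusedError:
        return "connection refused"  # socket file exists, but nothing is accepting
    except FileNotFoundError:
        return "no socket file"      # the socket path does not exist at all
    finally:
        s.close()

# Probing a path where no daemon was ever started:
print(unix_socket_status("/tmp/no-such-socket_vmnet.sock"))
```

The log's `Connection refused` corresponds to the middle case, which typically means the socket_vmnet daemon was not running on the CI host; minikube's qemu2 driver needs it started separately before the `socket_vmnet` network can work.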

TestAddons/Setup (43.91s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-163000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-163000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (43.908183708s)

-- stdout --
	* [addons-163000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-163000 in cluster addons-163000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.17.0
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	* Verifying ingress addon...
	* Verifying registry addon...
	* Verifying csi-hostpath-driver addon...
	

-- /stdout --
** stderr ** 
	I0706 10:56:39.721942    2542 out.go:296] Setting OutFile to fd 1 ...
	I0706 10:56:39.722073    2542 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 10:56:39.722076    2542 out.go:309] Setting ErrFile to fd 2...
	I0706 10:56:39.722078    2542 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 10:56:39.722145    2542 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 10:56:39.723162    2542 out.go:303] Setting JSON to false
	I0706 10:56:39.738589    2542 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1571,"bootTime":1688664628,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 10:56:39.738668    2542 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 10:56:39.742295    2542 out.go:177] * [addons-163000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 10:56:39.749246    2542 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 10:56:39.749330    2542 notify.go:220] Checking for updates...
	I0706 10:56:39.753296    2542 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 10:56:39.756272    2542 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 10:56:39.759271    2542 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 10:56:39.762215    2542 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 10:56:39.765271    2542 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 10:56:39.768364    2542 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 10:56:39.772260    2542 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 10:56:39.779311    2542 start.go:297] selected driver: qemu2
	I0706 10:56:39.779320    2542 start.go:944] validating driver "qemu2" against <nil>
	I0706 10:56:39.779327    2542 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 10:56:39.781275    2542 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 10:56:39.784247    2542 out.go:177] * Automatically selected the socket_vmnet network
	I0706 10:56:39.787298    2542 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 10:56:39.787315    2542 cni.go:84] Creating CNI manager for ""
	I0706 10:56:39.787321    2542 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 10:56:39.787326    2542 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 10:56:39.787333    2542 start_flags.go:319] config:
	{Name:addons-163000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-163000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 10:56:39.791168    2542 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 10:56:39.803289    2542 out.go:177] * Starting control plane node addons-163000 in cluster addons-163000
	I0706 10:56:39.807206    2542 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 10:56:39.807242    2542 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 10:56:39.807258    2542 cache.go:57] Caching tarball of preloaded images
	I0706 10:56:39.807335    2542 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 10:56:39.807341    2542 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 10:56:39.807547    2542 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/config.json ...
	I0706 10:56:39.807559    2542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/config.json: {Name:mk41e2f3ef09e696642244828ea0cde5ba2c8daf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:56:39.807770    2542 start.go:365] acquiring machines lock for addons-163000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 10:56:39.807888    2542 start.go:369] acquired machines lock for "addons-163000" in 112.125µs
	I0706 10:56:39.807898    2542 start.go:93] Provisioning new machine with config: &{Name:addons-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-163000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 10:56:39.807936    2542 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 10:56:39.816254    2542 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0706 10:56:40.171250    2542 start.go:159] libmachine.API.Create for "addons-163000" (driver="qemu2")
	I0706 10:56:40.171291    2542 client.go:168] LocalClient.Create starting
	I0706 10:56:40.171462    2542 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 10:56:40.334518    2542 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 10:56:40.615597    2542 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 10:56:40.808950    2542 main.go:141] libmachine: Creating SSH key...
	I0706 10:56:40.844820    2542 main.go:141] libmachine: Creating Disk image...
	I0706 10:56:40.844825    2542 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 10:56:40.845177    2542 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/disk.qcow2
	I0706 10:56:40.879784    2542 main.go:141] libmachine: STDOUT: 
	I0706 10:56:40.879809    2542 main.go:141] libmachine: STDERR: 
	I0706 10:56:40.879885    2542 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/disk.qcow2 +20000M
	I0706 10:56:40.887229    2542 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 10:56:40.887243    2542 main.go:141] libmachine: STDERR: 
	I0706 10:56:40.887262    2542 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/disk.qcow2
	I0706 10:56:40.887267    2542 main.go:141] libmachine: Starting QEMU VM...
	I0706 10:56:40.887316    2542 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:2e:9c:e5:00:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/disk.qcow2
	I0706 10:56:40.955718    2542 main.go:141] libmachine: STDOUT: 
	I0706 10:56:40.955748    2542 main.go:141] libmachine: STDERR: 
	I0706 10:56:40.955753    2542 main.go:141] libmachine: Attempt 0
	I0706 10:56:40.955770    2542 main.go:141] libmachine: Searching for b2:2e:9c:e5:0:5b in /var/db/dhcpd_leases ...
	I0706 10:56:42.957923    2542 main.go:141] libmachine: Attempt 1
	I0706 10:56:42.958009    2542 main.go:141] libmachine: Searching for b2:2e:9c:e5:0:5b in /var/db/dhcpd_leases ...
	I0706 10:56:44.960186    2542 main.go:141] libmachine: Attempt 2
	I0706 10:56:44.960231    2542 main.go:141] libmachine: Searching for b2:2e:9c:e5:0:5b in /var/db/dhcpd_leases ...
	I0706 10:56:46.962239    2542 main.go:141] libmachine: Attempt 3
	I0706 10:56:46.962253    2542 main.go:141] libmachine: Searching for b2:2e:9c:e5:0:5b in /var/db/dhcpd_leases ...
	I0706 10:56:48.964235    2542 main.go:141] libmachine: Attempt 4
	I0706 10:56:48.964244    2542 main.go:141] libmachine: Searching for b2:2e:9c:e5:0:5b in /var/db/dhcpd_leases ...
	I0706 10:56:50.964791    2542 main.go:141] libmachine: Attempt 5
	I0706 10:56:50.964815    2542 main.go:141] libmachine: Searching for b2:2e:9c:e5:0:5b in /var/db/dhcpd_leases ...
	I0706 10:56:52.966856    2542 main.go:141] libmachine: Attempt 6
	I0706 10:56:52.966891    2542 main.go:141] libmachine: Searching for b2:2e:9c:e5:0:5b in /var/db/dhcpd_leases ...
	I0706 10:56:52.967028    2542 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0706 10:56:52.967098    2542 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:2e:9c:e5:0:5b ID:1,b2:2e:9c:e5:0:5b Lease:0x64a851e3}
	I0706 10:56:52.967111    2542 main.go:141] libmachine: Found match: b2:2e:9c:e5:0:5b
	I0706 10:56:52.967123    2542 main.go:141] libmachine: IP: 192.168.105.2
	I0706 10:56:52.967139    2542 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0706 10:56:54.986519    2542 machine.go:88] provisioning docker machine ...
	I0706 10:56:54.986586    2542 buildroot.go:166] provisioning hostname "addons-163000"
	I0706 10:56:54.988087    2542 main.go:141] libmachine: Using SSH client type: native
	I0706 10:56:54.988909    2542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008cd1e0] 0x1008cfc40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0706 10:56:54.988928    2542 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-163000 && echo "addons-163000" | sudo tee /etc/hostname
	I0706 10:56:55.086335    2542 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-163000
	
	I0706 10:56:55.086462    2542 main.go:141] libmachine: Using SSH client type: native
	I0706 10:56:55.086957    2542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008cd1e0] 0x1008cfc40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0706 10:56:55.086973    2542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-163000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-163000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-163000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0706 10:56:55.159834    2542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0706 10:56:55.159860    2542 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15452-1247/.minikube CaCertPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15452-1247/.minikube}
	I0706 10:56:55.159877    2542 buildroot.go:174] setting up certificates
	I0706 10:56:55.159885    2542 provision.go:83] configureAuth start
	I0706 10:56:55.159890    2542 provision.go:138] copyHostCerts
	I0706 10:56:55.160059    2542 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.pem (1078 bytes)
	I0706 10:56:55.160402    2542 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15452-1247/.minikube/cert.pem (1123 bytes)
	I0706 10:56:55.160565    2542 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15452-1247/.minikube/key.pem (1675 bytes)
	I0706 10:56:55.160681    2542 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca-key.pem org=jenkins.addons-163000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-163000]
	I0706 10:56:55.256256    2542 provision.go:172] copyRemoteCerts
	I0706 10:56:55.256318    2542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0706 10:56:55.256339    2542 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/id_rsa Username:docker}
	I0706 10:56:55.287571    2542 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0706 10:56:55.294368    2542 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0706 10:56:55.301778    2542 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0706 10:56:55.309038    2542 provision.go:86] duration metric: configureAuth took 149.154041ms
	I0706 10:56:55.309045    2542 buildroot.go:189] setting minikube options for container-runtime
	I0706 10:56:55.309153    2542 config.go:182] Loaded profile config "addons-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 10:56:55.309189    2542 main.go:141] libmachine: Using SSH client type: native
	I0706 10:56:55.309405    2542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008cd1e0] 0x1008cfc40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0706 10:56:55.309410    2542 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0706 10:56:55.368117    2542 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0706 10:56:55.368124    2542 buildroot.go:70] root file system type: tmpfs
	I0706 10:56:55.368178    2542 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0706 10:56:55.368223    2542 main.go:141] libmachine: Using SSH client type: native
	I0706 10:56:55.368503    2542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008cd1e0] 0x1008cfc40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0706 10:56:55.368539    2542 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0706 10:56:55.435843    2542 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0706 10:56:55.435886    2542 main.go:141] libmachine: Using SSH client type: native
	I0706 10:56:55.436134    2542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008cd1e0] 0x1008cfc40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0706 10:56:55.436143    2542 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0706 10:56:55.791527    2542 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0706 10:56:55.791539    2542 machine.go:91] provisioned docker machine in 805.021208ms
	I0706 10:56:55.791544    2542 client.go:171] LocalClient.Create took 15.620768041s
	I0706 10:56:55.791561    2542 start.go:167] duration metric: libmachine.API.Create for "addons-163000" took 15.62083825s
	I0706 10:56:55.791569    2542 start.go:300] post-start starting for "addons-163000" (driver="qemu2")
	I0706 10:56:55.791573    2542 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0706 10:56:55.791643    2542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0706 10:56:55.791651    2542 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/id_rsa Username:docker}
	I0706 10:56:55.825145    2542 ssh_runner.go:195] Run: cat /etc/os-release
	I0706 10:56:55.826389    2542 info.go:137] Remote host: Buildroot 2021.02.12
	I0706 10:56:55.826403    2542 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1247/.minikube/addons for local assets ...
	I0706 10:56:55.826463    2542 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1247/.minikube/files for local assets ...
	I0706 10:56:55.826485    2542 start.go:303] post-start completed in 34.914583ms
	I0706 10:56:55.826838    2542 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/config.json ...
	I0706 10:56:55.826993    2542 start.go:128] duration metric: createHost completed in 16.019587292s
	I0706 10:56:55.827032    2542 main.go:141] libmachine: Using SSH client type: native
	I0706 10:56:55.827251    2542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008cd1e0] 0x1008cfc40 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0706 10:56:55.827256    2542 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0706 10:56:55.890902    2542 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688666215.520144544
	
	I0706 10:56:55.890912    2542 fix.go:206] guest clock: 1688666215.520144544
	I0706 10:56:55.890916    2542 fix.go:219] Guest: 2023-07-06 10:56:55.520144544 -0700 PDT Remote: 2023-07-06 10:56:55.827 -0700 PDT m=+16.124527085 (delta=-306.855456ms)
	I0706 10:56:55.890934    2542 fix.go:190] guest clock delta is within tolerance: -306.855456ms
	I0706 10:56:55.890937    2542 start.go:83] releasing machines lock for "addons-163000", held for 16.083579666s
	I0706 10:56:55.891234    2542 ssh_runner.go:195] Run: cat /version.json
	I0706 10:56:55.891242    2542 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/id_rsa Username:docker}
	I0706 10:56:55.891256    2542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0706 10:56:55.891292    2542 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/id_rsa Username:docker}
	I0706 10:56:55.923959    2542 ssh_runner.go:195] Run: systemctl --version
	I0706 10:56:55.965714    2542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0706 10:56:55.967933    2542 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0706 10:56:55.967966    2542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0706 10:56:55.974039    2542 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0706 10:56:55.974047    2542 start.go:466] detecting cgroup driver to use...
	I0706 10:56:55.974138    2542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 10:56:55.980445    2542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0706 10:56:55.983607    2542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0706 10:56:55.986739    2542 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0706 10:56:55.986765    2542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0706 10:56:55.990019    2542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 10:56:55.993457    2542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0706 10:56:55.996416    2542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 10:56:55.999303    2542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0706 10:56:56.002426    2542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0706 10:56:56.005701    2542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0706 10:56:56.008620    2542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0706 10:56:56.011240    2542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 10:56:56.081944    2542 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0706 10:56:56.090687    2542 start.go:466] detecting cgroup driver to use...
	I0706 10:56:56.090768    2542 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0706 10:56:56.096316    2542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 10:56:56.106020    2542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0706 10:56:56.112083    2542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 10:56:56.116750    2542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 10:56:56.121169    2542 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0706 10:56:56.162018    2542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 10:56:56.167086    2542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 10:56:56.172533    2542 ssh_runner.go:195] Run: which cri-dockerd
	I0706 10:56:56.173853    2542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0706 10:56:56.176758    2542 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0706 10:56:56.181809    2542 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0706 10:56:56.262032    2542 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0706 10:56:56.351074    2542 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0706 10:56:56.351087    2542 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0706 10:56:56.356308    2542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 10:56:56.437196    2542 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 10:56:57.594322    2542 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157149167s)
	I0706 10:56:57.594380    2542 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 10:56:57.683668    2542 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0706 10:56:57.764063    2542 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 10:56:57.847748    2542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 10:56:57.926719    2542 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0706 10:56:57.936011    2542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 10:56:58.023084    2542 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0706 10:56:58.046688    2542 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0706 10:56:58.046787    2542 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0706 10:56:58.050627    2542 start.go:534] Will wait 60s for crictl version
	I0706 10:56:58.050675    2542 ssh_runner.go:195] Run: which crictl
	I0706 10:56:58.052250    2542 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0706 10:56:58.066583    2542 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0706 10:56:58.066661    2542 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 10:56:58.076454    2542 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 10:56:58.091019    2542 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0706 10:56:58.091174    2542 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0706 10:56:58.092683    2542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0706 10:56:58.096254    2542 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 10:56:58.096319    2542 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0706 10:56:58.101713    2542 docker.go:636] Got preloaded images: 
	I0706 10:56:58.101722    2542 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0706 10:56:58.101764    2542 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0706 10:56:58.105192    2542 ssh_runner.go:195] Run: which lz4
	I0706 10:56:58.106648    2542 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0706 10:56:58.108032    2542 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0706 10:56:58.108042    2542 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0706 10:56:59.392132    2542 docker.go:600] Took 1.285581 seconds to copy over tarball
	I0706 10:56:59.392192    2542 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0706 10:57:00.416323    2542 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.024153459s)
	I0706 10:57:00.416336    2542 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0706 10:57:00.431659    2542 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0706 10:57:00.434732    2542 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0706 10:57:00.439973    2542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 10:57:00.527283    2542 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 10:57:02.130120    2542 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.602867333s)
	I0706 10:57:02.130201    2542 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0706 10:57:02.136303    2542 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0706 10:57:02.136317    2542 cache_images.go:84] Images are preloaded, skipping loading
	I0706 10:57:02.136383    2542 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0706 10:57:02.143918    2542 cni.go:84] Creating CNI manager for ""
	I0706 10:57:02.143927    2542 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 10:57:02.143936    2542 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0706 10:57:02.143945    2542 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-163000 NodeName:addons-163000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0706 10:57:02.144024    2542 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-163000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0706 10:57:02.144342    2542 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-163000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-163000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0706 10:57:02.144407    2542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0706 10:57:02.148169    2542 binaries.go:44] Found k8s binaries, skipping transfer
	I0706 10:57:02.148213    2542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0706 10:57:02.150849    2542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0706 10:57:02.155890    2542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0706 10:57:02.160648    2542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0706 10:57:02.165722    2542 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0706 10:57:02.167144    2542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0706 10:57:02.170711    2542 certs.go:56] Setting up /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000 for IP: 192.168.105.2
	I0706 10:57:02.170719    2542 certs.go:190] acquiring lock for shared ca certs: {Name:mk763e62c6a9326245ca88f64c15681d0696aa38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:57:02.170872    2542 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.key
	I0706 10:57:02.232455    2542 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.crt ...
	I0706 10:57:02.232464    2542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.crt: {Name:mk831b56e0994684106e394b6e73915d0ce43fca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:57:02.232670    2542 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.key ...
	I0706 10:57:02.232675    2542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.key: {Name:mkafae4f46a7a2ed45ad08f697fb96d104130629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:57:02.232798    2542 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.key
	I0706 10:57:02.445873    2542 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.crt ...
	I0706 10:57:02.445877    2542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.crt: {Name:mk25f161a6857851d0d4885008113a020460051e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:57:02.446047    2542 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.key ...
	I0706 10:57:02.446052    2542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.key: {Name:mk8eaa7e4c1031e9f4123309808f5170477f2102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:57:02.446198    2542 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/client.key
	I0706 10:57:02.446208    2542 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/client.crt with IP's: []
	I0706 10:57:02.594699    2542 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/client.crt ...
	I0706 10:57:02.594712    2542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/client.crt: {Name:mkf8724871c0c17f1c6a78951b594b772bdfe9ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:57:02.594948    2542 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/client.key ...
	I0706 10:57:02.594952    2542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/client.key: {Name:mkbd0be1cd0b7410a7686f1053b6a86ad898ad2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:57:02.595073    2542 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/apiserver.key.96055969
	I0706 10:57:02.595082    2542 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0706 10:57:02.779936    2542 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/apiserver.crt.96055969 ...
	I0706 10:57:02.779941    2542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/apiserver.crt.96055969: {Name:mk5f5f47881f2ae24e87094665f7e8fe1b4b181e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:57:02.780133    2542 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/apiserver.key.96055969 ...
	I0706 10:57:02.780136    2542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/apiserver.key.96055969: {Name:mkdb85651529302987d46c8ae86d7ce25950be6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:57:02.780251    2542 certs.go:337] copying /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/apiserver.crt
	I0706 10:57:02.780357    2542 certs.go:341] copying /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/apiserver.key
	I0706 10:57:02.780460    2542 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/proxy-client.key
	I0706 10:57:02.780470    2542 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/proxy-client.crt with IP's: []
	I0706 10:57:03.075625    2542 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/proxy-client.crt ...
	I0706 10:57:03.075639    2542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/proxy-client.crt: {Name:mk70ec259512652182c7a6e0bb2ec546dfd5201c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:57:03.075965    2542 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/proxy-client.key ...
	I0706 10:57:03.075969    2542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/proxy-client.key: {Name:mk620f97d3f9a10de3ac5d83ba0f141095e6d35e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:57:03.076249    2542 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca-key.pem (1675 bytes)
	I0706 10:57:03.076276    2542 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem (1078 bytes)
	I0706 10:57:03.076297    2542 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem (1123 bytes)
	I0706 10:57:03.076315    2542 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/key.pem (1675 bytes)
	I0706 10:57:03.076665    2542 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0706 10:57:03.084790    2542 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0706 10:57:03.091413    2542 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0706 10:57:03.098243    2542 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/addons-163000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0706 10:57:03.105585    2542 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0706 10:57:03.112895    2542 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0706 10:57:03.119748    2542 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0706 10:57:03.126426    2542 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0706 10:57:03.133791    2542 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0706 10:57:03.141361    2542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0706 10:57:03.150420    2542 ssh_runner.go:195] Run: openssl version
	I0706 10:57:03.152493    2542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0706 10:57:03.155420    2542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0706 10:57:03.156788    2542 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  6 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0706 10:57:03.156810    2542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0706 10:57:03.158608    2542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
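The two `Run` lines above install `minikubeCA.pem` into OpenSSL's hashed-certificate layout: `b5213941.0` is the certificate's subject hash, which `openssl x509 -hash` computed just before. A self-contained sketch of how such a hash-named symlink is derived, using a throwaway self-signed CA in a temp directory (names and paths illustrative):

```shell
# Generate a throwaway self-signed CA, then link it under the name
# OpenSSL's CApath lookup expects: <subject-hash>.0
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" -days 1 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"
ls -l "$DIR/$HASH.0"
```

Tools that trust a directory of CAs (`-CApath`) resolve issuers by this hash, which is why minikube creates the symlink rather than just copying the PEM.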
	I0706 10:57:03.161577    2542 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0706 10:57:03.162780    2542 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0706 10:57:03.162815    2542 kubeadm.go:404] StartCluster: {Name:addons-163000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-163000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 10:57:03.162887    2542 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0706 10:57:03.168342    2542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0706 10:57:03.171204    2542 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0706 10:57:03.174229    2542 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0706 10:57:03.177281    2542 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0706 10:57:03.177298    2542 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0706 10:57:03.202093    2542 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0706 10:57:03.202132    2542 kubeadm.go:322] [preflight] Running pre-flight checks
	I0706 10:57:03.262141    2542 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0706 10:57:03.262195    2542 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0706 10:57:03.262246    2542 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0706 10:57:03.317762    2542 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0706 10:57:03.327668    2542 out.go:204]   - Generating certificates and keys ...
	I0706 10:57:03.327713    2542 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0706 10:57:03.327765    2542 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0706 10:57:03.623865    2542 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0706 10:57:03.724258    2542 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0706 10:57:03.773675    2542 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0706 10:57:03.821170    2542 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0706 10:57:03.949738    2542 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0706 10:57:03.949797    2542 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-163000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0706 10:57:04.045807    2542 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0706 10:57:04.045866    2542 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-163000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0706 10:57:04.210562    2542 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0706 10:57:04.372397    2542 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0706 10:57:04.539464    2542 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0706 10:57:04.539511    2542 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0706 10:57:04.653297    2542 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0706 10:57:04.691747    2542 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0706 10:57:04.924811    2542 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0706 10:57:05.305402    2542 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0706 10:57:05.312064    2542 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0706 10:57:05.312134    2542 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0706 10:57:05.312156    2542 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0706 10:57:05.391724    2542 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0706 10:57:05.395902    2542 out.go:204]   - Booting up control plane ...
	I0706 10:57:05.395947    2542 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0706 10:57:05.396013    2542 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0706 10:57:05.396053    2542 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0706 10:57:05.396108    2542 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0706 10:57:05.396935    2542 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0706 10:57:09.400350    2542 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003185 seconds
	I0706 10:57:09.400580    2542 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0706 10:57:09.411023    2542 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0706 10:57:09.922249    2542 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0706 10:57:09.922353    2542 kubeadm.go:322] [mark-control-plane] Marking the node addons-163000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0706 10:57:10.437972    2542 kubeadm.go:322] [bootstrap-token] Using token: 0qc6mo.3wpttvsssf0bqf1q
	I0706 10:57:10.447158    2542 out.go:204]   - Configuring RBAC rules ...
	I0706 10:57:10.447326    2542 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0706 10:57:10.447468    2542 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0706 10:57:10.450947    2542 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0706 10:57:10.453392    2542 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0706 10:57:10.455963    2542 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0706 10:57:10.458577    2542 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0706 10:57:10.465756    2542 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0706 10:57:10.655134    2542 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0706 10:57:10.847130    2542 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0706 10:57:10.847744    2542 kubeadm.go:322] 
	I0706 10:57:10.847777    2542 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0706 10:57:10.847781    2542 kubeadm.go:322] 
	I0706 10:57:10.847827    2542 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0706 10:57:10.847835    2542 kubeadm.go:322] 
	I0706 10:57:10.847847    2542 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0706 10:57:10.847899    2542 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0706 10:57:10.847925    2542 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0706 10:57:10.847927    2542 kubeadm.go:322] 
	I0706 10:57:10.847951    2542 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0706 10:57:10.847954    2542 kubeadm.go:322] 
	I0706 10:57:10.847981    2542 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0706 10:57:10.847984    2542 kubeadm.go:322] 
	I0706 10:57:10.848021    2542 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0706 10:57:10.848061    2542 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0706 10:57:10.848097    2542 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0706 10:57:10.848103    2542 kubeadm.go:322] 
	I0706 10:57:10.848146    2542 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0706 10:57:10.848188    2542 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0706 10:57:10.848191    2542 kubeadm.go:322] 
	I0706 10:57:10.848239    2542 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0qc6mo.3wpttvsssf0bqf1q \
	I0706 10:57:10.848307    2542 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:54887cb817b031a56c6be5acb24737812f5477ec9674aeae1af9b05ae3868136 \
	I0706 10:57:10.848320    2542 kubeadm.go:322] 	--control-plane 
	I0706 10:57:10.848323    2542 kubeadm.go:322] 
	I0706 10:57:10.848407    2542 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0706 10:57:10.848411    2542 kubeadm.go:322] 
	I0706 10:57:10.848454    2542 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0qc6mo.3wpttvsssf0bqf1q \
	I0706 10:57:10.848507    2542 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:54887cb817b031a56c6be5acb24737812f5477ec9674aeae1af9b05ae3868136 
	I0706 10:57:10.848572    2542 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0706 10:57:10.848584    2542 cni.go:84] Creating CNI manager for ""
	I0706 10:57:10.848591    2542 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 10:57:10.856874    2542 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0706 10:57:10.858407    2542 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0706 10:57:10.861594    2542 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
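The `scp memory --> /etc/cni/net.d/1-k8s.conflist` line above writes minikube's bridge CNI configuration onto the node. A representative bridge conflist of that kind, written to a temp file and validated (field values are illustrative, not the exact bytes minikube ships):

```shell
# Write a plausible bridge CNI conflist and check it parses as JSON.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}
EOF
python3 -c "import json,sys; json.load(open(sys.argv[1])); print('valid json')" "$CONF"
```

With `NetworkPlugin:cni` and no explicit CNI selected, the kubelet picks up whatever conflist sorts first in `/etc/cni/net.d`, which is why minikube prefixes the file name with `1-`.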
	I0706 10:57:10.866379    2542 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0706 10:57:10.866426    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:10.866453    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=b6e1a3abc91e215b081da44b95c5d4a34c954e9b minikube.k8s.io/name=addons-163000 minikube.k8s.io/updated_at=2023_07_06T10_57_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:10.917781    2542 ops.go:34] apiserver oom_adj: -16
	I0706 10:57:10.917814    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:11.459593    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:11.959506    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:12.459683    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:12.959488    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:13.459681    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:13.959666    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:14.459557    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:14.959643    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:15.459592    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:15.959600    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:16.459600    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:16.959536    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:17.459505    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:17.959342    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:18.459583    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:18.957734    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:19.459492    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:19.959211    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:20.459190    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:20.959119    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:21.459109    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:21.959096    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:22.459128    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:22.959086    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:23.459141    2542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 10:57:23.525967    2542 kubeadm.go:1081] duration metric: took 12.660000375s to wait for elevateKubeSystemPrivileges.
	I0706 10:57:23.525983    2542 kubeadm.go:406] StartCluster complete in 20.363846334s
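The run of identical `kubectl get sa default` commands above is a poll loop: minikube retries roughly every 500ms until the `default` service account exists before granting the `minikube-rbac` binding, and the 12.66s duration metric is just that wait. The same wait-until-ready pattern, sketched standalone (the readiness probe here is a stand-in, not minikube's actual check):

```shell
# Poll a readiness check every 500ms until it succeeds or a deadline passes,
# mirroring minikube's wait for `kubectl get sa default`.
wait_for() {
  deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 0.5
  done
}

MARKER=$(mktemp -u)
( sleep 1; touch "$MARKER" ) &    # becomes "ready" after about a second
wait_for 10 test -e "$MARKER" && echo ready
```

Polling with a hard deadline keeps a flaky control plane from hanging the start path forever; on timeout the caller gets a non-zero status and can surface an error instead.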
	I0706 10:57:23.525992    2542 settings.go:142] acquiring lock: {Name:mk352fa14b583fbace5fdd55e6f9ba4f39f48007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:57:23.526134    2542 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 10:57:23.526318    2542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/kubeconfig: {Name:mk34623cbdb1646c9229359a97354a4ad80828c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:57:23.526545    2542 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0706 10:57:23.526606    2542 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0706 10:57:23.526683    2542 addons.go:66] Setting volumesnapshots=true in profile "addons-163000"
	I0706 10:57:23.526694    2542 addons.go:228] Setting addon volumesnapshots=true in "addons-163000"
	I0706 10:57:23.526723    2542 addons.go:66] Setting metrics-server=true in profile "addons-163000"
	I0706 10:57:23.526747    2542 addons.go:66] Setting ingress=true in profile "addons-163000"
	I0706 10:57:23.526763    2542 addons.go:66] Setting storage-provisioner=true in profile "addons-163000"
	I0706 10:57:23.526766    2542 addons.go:66] Setting cloud-spanner=true in profile "addons-163000"
	I0706 10:57:23.526768    2542 addons.go:228] Setting addon storage-provisioner=true in "addons-163000"
	I0706 10:57:23.526771    2542 addons.go:228] Setting addon cloud-spanner=true in "addons-163000"
	I0706 10:57:23.526772    2542 addons.go:228] Setting addon ingress=true in "addons-163000"
	I0706 10:57:23.526771    2542 addons.go:66] Setting registry=true in profile "addons-163000"
	I0706 10:57:23.526781    2542 addons.go:66] Setting ingress-dns=true in profile "addons-163000"
	I0706 10:57:23.526795    2542 host.go:66] Checking if "addons-163000" exists ...
	I0706 10:57:23.526799    2542 addons.go:66] Setting default-storageclass=true in profile "addons-163000"
	I0706 10:57:23.526805    2542 addons.go:66] Setting inspektor-gadget=true in profile "addons-163000"
	I0706 10:57:23.526808    2542 addons.go:66] Setting gcp-auth=true in profile "addons-163000"
	I0706 10:57:23.526760    2542 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-163000"
	I0706 10:57:23.526772    2542 addons.go:228] Setting addon metrics-server=true in "addons-163000"
	I0706 10:57:23.526806    2542 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-163000"
	I0706 10:57:23.526818    2542 addons.go:228] Setting addon inspektor-gadget=true in "addons-163000"
	I0706 10:57:23.526892    2542 host.go:66] Checking if "addons-163000" exists ...
	I0706 10:57:23.526796    2542 host.go:66] Checking if "addons-163000" exists ...
	I0706 10:57:23.526763    2542 config.go:182] Loaded profile config "addons-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 10:57:23.526795    2542 addons.go:228] Setting addon ingress-dns=true in "addons-163000"
	I0706 10:57:23.527103    2542 host.go:66] Checking if "addons-163000" exists ...
	W0706 10:57:23.527099    2542 host.go:54] host status for "addons-163000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/monitor: connect: connection refused
	W0706 10:57:23.527121    2542 addons.go:274] "addons-163000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0706 10:57:23.526821    2542 mustload.go:65] Loading cluster: addons-163000
	I0706 10:57:23.527208    2542 config.go:182] Loaded profile config "addons-163000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 10:57:23.526846    2542 host.go:66] Checking if "addons-163000" exists ...
	I0706 10:57:23.526882    2542 addons.go:228] Setting addon registry=true in "addons-163000"
	I0706 10:57:23.527308    2542 host.go:66] Checking if "addons-163000" exists ...
	I0706 10:57:23.526899    2542 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-163000"
	I0706 10:57:23.527327    2542 host.go:66] Checking if "addons-163000" exists ...
	I0706 10:57:23.530982    2542 out.go:177] 
	W0706 10:57:23.527310    2542 host.go:54] host status for "addons-163000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/monitor: connect: connection refused
	I0706 10:57:23.526928    2542 host.go:66] Checking if "addons-163000" exists ...
	I0706 10:57:23.526760    2542 host.go:66] Checking if "addons-163000" exists ...
	W0706 10:57:23.527562    2542 host.go:54] host status for "addons-163000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/monitor: connect: connection refused
	W0706 10:57:23.527585    2542 host.go:54] host status for "addons-163000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/monitor: connect: connection refused
	W0706 10:57:23.527681    2542 host.go:54] host status for "addons-163000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/monitor: connect: connection refused
	W0706 10:57:23.527712    2542 host.go:54] host status for "addons-163000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/monitor: connect: connection refused
	I0706 10:57:23.536943    2542 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	W0706 10:57:23.533898    2542 addons.go:274] "addons-163000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0706 10:57:23.533910    2542 addons.go:274] "addons-163000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	W0706 10:57:23.533925    2542 addons.go:274] "addons-163000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0706 10:57:23.533939    2542 addons.go:274] "addons-163000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0706 10:57:23.533954    2542 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/monitor: connect: connection refused
	W0706 10:57:23.533956    2542 addons.go:274] "addons-163000" is not running, setting registry=true and skipping enablement (err=<nil>)
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/monitor: connect: connection refused
	I0706 10:57:23.535255    2542 addons.go:228] Setting addon default-storageclass=true in "addons-163000"
	I0706 10:57:23.540947    2542 addons.go:464] Verifying addon ingress=true in "addons-163000"
	I0706 10:57:23.540960    2542 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	W0706 10:57:23.540963    2542 out.go:239] * 
	I0706 10:57:23.540966    2542 addons.go:464] Verifying addon registry=true in "addons-163000"
	I0706 10:57:23.540998    2542 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-163000"
	I0706 10:57:23.544878    2542 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0706 10:57:23.550848    2542 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.17.0
	* 
	I0706 10:57:23.550866    2542 host.go:66] Checking if "addons-163000" exists ...
	I0706 10:57:23.553949    2542 addons.go:420] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0706 10:57:23.565896    2542 out.go:177] * Verifying ingress addon...
	I0706 10:57:23.550919    2542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	W0706 10:57:23.551430    2542 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 10:57:23.553958    2542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0706 10:57:23.554673    2542 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0706 10:57:23.572888    2542 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/id_rsa Username:docker}
	I0706 10:57:23.573320    2542 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0706 10:57:23.580003    2542 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0706 10:57:23.582952    2542 out.go:177] * Verifying registry addon...
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 10:57:23.588972    2542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0706 10:57:23.591869    2542 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/id_rsa Username:docker}
	I0706 10:57:23.588977    2542 out.go:177] * Verifying csi-hostpath-driver addon...
	I0706 10:57:23.588997    2542 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/id_rsa Username:docker}
	I0706 10:57:23.589028    2542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0706 10:57:23.592092    2542 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0706 10:57:23.592234    2542 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0706 10:57:23.594182    2542 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0706 10:57:23.594997    2542 out.go:177] 
	I0706 10:57:23.595000    2542 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/addons-163000/id_rsa Username:docker}

** /stderr **
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-163000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (43.91s)

TestCertOptions (10.01s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-106000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-106000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.730943708s)

-- stdout --
	* [cert-options-106000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-106000 in cluster cert-options-106000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-106000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-106000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-106000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-106000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-106000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (83.760875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-106000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-106000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-106000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-106000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-106000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (39.258417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-106000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-106000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-106000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-07-06 11:12:11.656311 -0700 PDT m=+959.807156918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-106000 -n cert-options-106000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-106000 -n cert-options-106000: exit status 7 (28.692334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-106000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-106000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-106000
--- FAIL: TestCertOptions (10.01s)
E0706 11:13:55.567108    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0706 11:14:23.277571    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory

TestCertExpiration (197.4s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-868000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-868000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.72029225s)

-- stdout --
	* [cert-expiration-868000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-868000 in cluster cert-expiration-868000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-868000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-868000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-868000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-868000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-868000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (7.506221s)

-- stdout --
	* [cert-expiration-868000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-868000 in cluster cert-expiration-868000
	* Restarting existing qemu2 VM for "cert-expiration-868000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-868000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-868000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-868000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-868000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-868000 in cluster cert-expiration-868000
	* Restarting existing qemu2 VM for "cert-expiration-868000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-868000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-868000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-07-06 11:15:14.032533 -0700 PDT m=+1142.183970876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-868000 -n cert-expiration-868000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-868000 -n cert-expiration-868000: exit status 7 (69.186ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-868000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-868000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-868000
--- FAIL: TestCertExpiration (197.40s)
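The TestCertOptions, TestCertExpiration, and TestDockerFlags failures above never get past VM creation: each run exits with `Failed to connect to "/var/run/socket_vmnet": Connection refused` before Kubernetes starts. A minimal triage sketch for that condition follows; the socket path is taken from the log above, while the helper name and the launchctl hint are assumptions about a typical socket_vmnet install on the Jenkins host, not something this report confirms.

```shell
#!/bin/sh
# Triage sketch for the recurring connection-refused errors above.
# Only the socket path comes from the log; the rest is assumed.

check_socket() {
  # True only if the given path exists and is a UNIX domain socket.
  [ -S "$1" ]
}

SOCK="${1:-/var/run/socket_vmnet}"
if check_socket "$SOCK"; then
  echo "socket present: $SOCK"
else
  echo "socket missing or not a socket: $SOCK"
  echo "check whether the socket_vmnet service is loaded (e.g. via launchctl)"
fi
```

If the socket is absent, restarting the socket_vmnet helper on the build host would be the first thing to try; every per-test error in this group is downstream of that single condition.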

TestDockerFlags (10.07s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-049000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-049000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.819723042s)

-- stdout --
	* [docker-flags-049000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-049000 in cluster docker-flags-049000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-049000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:11:51.728655    4294 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:11:51.728802    4294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:11:51.728805    4294 out.go:309] Setting ErrFile to fd 2...
	I0706 11:11:51.728808    4294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:11:51.728868    4294 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:11:51.729888    4294 out.go:303] Setting JSON to false
	I0706 11:11:51.745014    4294 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2483,"bootTime":1688664628,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:11:51.745066    4294 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:11:51.750224    4294 out.go:177] * [docker-flags-049000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:11:51.758267    4294 notify.go:220] Checking for updates...
	I0706 11:11:51.762214    4294 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:11:51.765253    4294 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:11:51.768211    4294 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:11:51.771207    4294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:11:51.774241    4294 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:11:51.777247    4294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:11:51.780461    4294 config.go:182] Loaded profile config "force-systemd-flag-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:11:51.780526    4294 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:11:51.780568    4294 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:11:51.785180    4294 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:11:51.791195    4294 start.go:297] selected driver: qemu2
	I0706 11:11:51.791202    4294 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:11:51.791209    4294 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:11:51.793167    4294 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:11:51.798259    4294 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:11:51.801309    4294 start_flags.go:914] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0706 11:11:51.801334    4294 cni.go:84] Creating CNI manager for ""
	I0706 11:11:51.801342    4294 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:11:51.801347    4294 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:11:51.801353    4294 start_flags.go:319] config:
	{Name:docker-flags-049000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-049000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:11:51.805763    4294 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:11:51.813254    4294 out.go:177] * Starting control plane node docker-flags-049000 in cluster docker-flags-049000
	I0706 11:11:51.817175    4294 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:11:51.817196    4294 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:11:51.817208    4294 cache.go:57] Caching tarball of preloaded images
	I0706 11:11:51.817263    4294 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:11:51.817268    4294 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:11:51.817332    4294 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/docker-flags-049000/config.json ...
	I0706 11:11:51.817344    4294 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/docker-flags-049000/config.json: {Name:mk955401db63b949b969493888873995a724b0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:11:51.817541    4294 start.go:365] acquiring machines lock for docker-flags-049000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:11:51.817572    4294 start.go:369] acquired machines lock for "docker-flags-049000" in 23.417µs
	I0706 11:11:51.817583    4294 start.go:93] Provisioning new machine with config: &{Name:docker-flags-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-049000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:11:51.817611    4294 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:11:51.826201    4294 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0706 11:11:51.842327    4294 start.go:159] libmachine.API.Create for "docker-flags-049000" (driver="qemu2")
	I0706 11:11:51.842353    4294 client.go:168] LocalClient.Create starting
	I0706 11:11:51.842406    4294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:11:51.842429    4294 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:51.842441    4294 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:51.842484    4294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:11:51.842502    4294 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:51.842510    4294 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:51.842845    4294 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:11:51.958934    4294 main.go:141] libmachine: Creating SSH key...
	I0706 11:11:52.105705    4294 main.go:141] libmachine: Creating Disk image...
	I0706 11:11:52.105711    4294 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:11:52.105876    4294 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/disk.qcow2
	I0706 11:11:52.114623    4294 main.go:141] libmachine: STDOUT: 
	I0706 11:11:52.114639    4294 main.go:141] libmachine: STDERR: 
	I0706 11:11:52.114685    4294 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/disk.qcow2 +20000M
	I0706 11:11:52.121827    4294 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:11:52.121841    4294 main.go:141] libmachine: STDERR: 
	I0706 11:11:52.121852    4294 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/disk.qcow2
	I0706 11:11:52.121860    4294 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:11:52.121892    4294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:41:c2:8a:71:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/disk.qcow2
	I0706 11:11:52.123420    4294 main.go:141] libmachine: STDOUT: 
	I0706 11:11:52.123434    4294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:11:52.123453    4294 client.go:171] LocalClient.Create took 281.094916ms
	I0706 11:11:54.125621    4294 start.go:128] duration metric: createHost completed in 2.307999375s
	I0706 11:11:54.125680    4294 start.go:83] releasing machines lock for "docker-flags-049000", held for 2.308107167s
	W0706 11:11:54.125740    4294 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:54.134178    4294 out.go:177] * Deleting "docker-flags-049000" in qemu2 ...
	W0706 11:11:54.153501    4294 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:54.153526    4294 start.go:687] Will try again in 5 seconds ...
	I0706 11:11:59.155806    4294 start.go:365] acquiring machines lock for docker-flags-049000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:11:59.157331    4294 start.go:369] acquired machines lock for "docker-flags-049000" in 1.401ms
	I0706 11:11:59.157455    4294 start.go:93] Provisioning new machine with config: &{Name:docker-flags-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-049000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:11:59.157740    4294 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:11:59.163445    4294 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0706 11:11:59.208732    4294 start.go:159] libmachine.API.Create for "docker-flags-049000" (driver="qemu2")
	I0706 11:11:59.208774    4294 client.go:168] LocalClient.Create starting
	I0706 11:11:59.208935    4294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:11:59.208988    4294 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:59.209013    4294 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:59.209111    4294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:11:59.209156    4294 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:59.209172    4294 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:59.209743    4294 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:11:59.337392    4294 main.go:141] libmachine: Creating SSH key...
	I0706 11:11:59.461375    4294 main.go:141] libmachine: Creating Disk image...
	I0706 11:11:59.461381    4294 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:11:59.461530    4294 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/disk.qcow2
	I0706 11:11:59.470430    4294 main.go:141] libmachine: STDOUT: 
	I0706 11:11:59.470445    4294 main.go:141] libmachine: STDERR: 
	I0706 11:11:59.470513    4294 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/disk.qcow2 +20000M
	I0706 11:11:59.477835    4294 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:11:59.477855    4294 main.go:141] libmachine: STDERR: 
	I0706 11:11:59.477870    4294 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/disk.qcow2
	I0706 11:11:59.477876    4294 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:11:59.477916    4294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:89:17:0b:97:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/docker-flags-049000/disk.qcow2
	I0706 11:11:59.479472    4294 main.go:141] libmachine: STDOUT: 
	I0706 11:11:59.479486    4294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:11:59.479496    4294 client.go:171] LocalClient.Create took 270.715875ms
	I0706 11:12:01.481689    4294 start.go:128] duration metric: createHost completed in 2.323889167s
	I0706 11:12:01.481746    4294 start.go:83] releasing machines lock for "docker-flags-049000", held for 2.324398875s
	W0706 11:12:01.482186    4294 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-049000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:12:01.491858    4294 out.go:177] 
	W0706 11:12:01.496768    4294 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:12:01.496809    4294 out.go:239] * 
	W0706 11:12:01.499244    4294 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:12:01.508704    4294 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-049000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-049000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-049000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (78.086542ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-049000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-049000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-049000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-049000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-049000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-049000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (43.453458ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-049000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-049000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-049000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-049000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-07-06 11:12:01.646046 -0700 PDT m=+949.796860043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-049000 -n docker-flags-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-049000 -n docker-flags-049000: exit status 7 (28.114459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-049000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-049000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-049000
--- FAIL: TestDockerFlags (10.07s)
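The failure above is not specific to the Docker flags under test: every VM creation attempt dies with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon was not listening on the CI host. A minimal pre-flight check is sketched below; it assumes socket_vmnet was installed via Homebrew and uses a hypothetical `SOCKET_VMNET_PATH` override variable (the path itself matches the `SocketVMnetPath` value in the profile config above).

```shell
# Pre-flight check: the qemu2 driver on macOS needs the socket_vmnet
# daemon listening on its UNIX socket before minikube can start a VM.
SOCKET="${SOCKET_VMNET_PATH:-/var/run/socket_vmnet}"

if [ -S "$SOCKET" ]; then
    echo "socket_vmnet socket present at $SOCKET"
else
    # When installed with Homebrew, the daemon is typically started via:
    #   sudo brew services start socket_vmnet
    echo "socket_vmnet socket missing at $SOCKET"
fi
```

Running this before the test suite would distinguish an environment problem (daemon not running) from a genuine regression in the flags being tested.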

TestForceSystemdFlag (10.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-413000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-413000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.832712125s)

-- stdout --
	* [force-systemd-flag-413000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-413000 in cluster force-systemd-flag-413000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:11:46.783506    4272 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:11:46.783663    4272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:11:46.783666    4272 out.go:309] Setting ErrFile to fd 2...
	I0706 11:11:46.783668    4272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:11:46.783740    4272 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:11:46.784767    4272 out.go:303] Setting JSON to false
	I0706 11:11:46.799779    4272 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2478,"bootTime":1688664628,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:11:46.799855    4272 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:11:46.807739    4272 out.go:177] * [force-systemd-flag-413000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:11:46.810776    4272 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:11:46.815737    4272 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:11:46.810837    4272 notify.go:220] Checking for updates...
	I0706 11:11:46.821671    4272 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:11:46.824737    4272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:11:46.827686    4272 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:11:46.830668    4272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:11:46.833983    4272 config.go:182] Loaded profile config "force-systemd-env-786000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:11:46.834049    4272 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:11:46.834094    4272 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:11:46.837658    4272 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:11:46.844692    4272 start.go:297] selected driver: qemu2
	I0706 11:11:46.844698    4272 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:11:46.844704    4272 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:11:46.846679    4272 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:11:46.848091    4272 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:11:46.850767    4272 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0706 11:11:46.850784    4272 cni.go:84] Creating CNI manager for ""
	I0706 11:11:46.850790    4272 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:11:46.850795    4272 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:11:46.850802    4272 start_flags.go:319] config:
	{Name:force-systemd-flag-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-413000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:11:46.854817    4272 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:11:46.861664    4272 out.go:177] * Starting control plane node force-systemd-flag-413000 in cluster force-systemd-flag-413000
	I0706 11:11:46.865685    4272 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:11:46.865704    4272 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:11:46.865715    4272 cache.go:57] Caching tarball of preloaded images
	I0706 11:11:46.865765    4272 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:11:46.865770    4272 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:11:46.865819    4272 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/force-systemd-flag-413000/config.json ...
	I0706 11:11:46.865836    4272 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/force-systemd-flag-413000/config.json: {Name:mkac6215d46887b85a9ed102da738d0a06d72a21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:11:46.866041    4272 start.go:365] acquiring machines lock for force-systemd-flag-413000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:11:46.866071    4272 start.go:369] acquired machines lock for "force-systemd-flag-413000" in 24.292µs
	I0706 11:11:46.866083    4272 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-413000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:11:46.866108    4272 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:11:46.874685    4272 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0706 11:11:46.890482    4272 start.go:159] libmachine.API.Create for "force-systemd-flag-413000" (driver="qemu2")
	I0706 11:11:46.890515    4272 client.go:168] LocalClient.Create starting
	I0706 11:11:46.890568    4272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:11:46.890589    4272 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:46.890602    4272 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:46.890634    4272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:11:46.890648    4272 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:46.890654    4272 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:46.890941    4272 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:11:47.047683    4272 main.go:141] libmachine: Creating SSH key...
	I0706 11:11:47.143754    4272 main.go:141] libmachine: Creating Disk image...
	I0706 11:11:47.143766    4272 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:11:47.143919    4272 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/disk.qcow2
	I0706 11:11:47.152717    4272 main.go:141] libmachine: STDOUT: 
	I0706 11:11:47.152739    4272 main.go:141] libmachine: STDERR: 
	I0706 11:11:47.152803    4272 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/disk.qcow2 +20000M
	I0706 11:11:47.159911    4272 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:11:47.159925    4272 main.go:141] libmachine: STDERR: 
	I0706 11:11:47.159940    4272 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/disk.qcow2
	I0706 11:11:47.159949    4272 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:11:47.159985    4272 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:12:46:0d:28:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/disk.qcow2
	I0706 11:11:47.161516    4272 main.go:141] libmachine: STDOUT: 
	I0706 11:11:47.161529    4272 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:11:47.161545    4272 client.go:171] LocalClient.Create took 271.026708ms
	I0706 11:11:49.163765    4272 start.go:128] duration metric: createHost completed in 2.297637208s
	I0706 11:11:49.163848    4272 start.go:83] releasing machines lock for "force-systemd-flag-413000", held for 2.297775041s
	W0706 11:11:49.163929    4272 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:49.181228    4272 out.go:177] * Deleting "force-systemd-flag-413000" in qemu2 ...
	W0706 11:11:49.196231    4272 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:49.196265    4272 start.go:687] Will try again in 5 seconds ...
	I0706 11:11:54.198546    4272 start.go:365] acquiring machines lock for force-systemd-flag-413000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:11:54.198997    4272 start.go:369] acquired machines lock for "force-systemd-flag-413000" in 315.333µs
	I0706 11:11:54.199158    4272 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-413000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:11:54.199432    4272 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:11:54.209075    4272 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0706 11:11:54.256390    4272 start.go:159] libmachine.API.Create for "force-systemd-flag-413000" (driver="qemu2")
	I0706 11:11:54.256425    4272 client.go:168] LocalClient.Create starting
	I0706 11:11:54.256563    4272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:11:54.256609    4272 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:54.256636    4272 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:54.256736    4272 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:11:54.256766    4272 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:54.256781    4272 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:54.257348    4272 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:11:54.387455    4272 main.go:141] libmachine: Creating SSH key...
	I0706 11:11:54.531517    4272 main.go:141] libmachine: Creating Disk image...
	I0706 11:11:54.531527    4272 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:11:54.531697    4272 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/disk.qcow2
	I0706 11:11:54.540939    4272 main.go:141] libmachine: STDOUT: 
	I0706 11:11:54.540953    4272 main.go:141] libmachine: STDERR: 
	I0706 11:11:54.541025    4272 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/disk.qcow2 +20000M
	I0706 11:11:54.548442    4272 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:11:54.548471    4272 main.go:141] libmachine: STDERR: 
	I0706 11:11:54.548487    4272 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/disk.qcow2
	I0706 11:11:54.548495    4272 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:11:54.548527    4272 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:d3:2c:a1:96:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-flag-413000/disk.qcow2
	I0706 11:11:54.550072    4272 main.go:141] libmachine: STDOUT: 
	I0706 11:11:54.550083    4272 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:11:54.550095    4272 client.go:171] LocalClient.Create took 293.667625ms
	I0706 11:11:56.552339    4272 start.go:128] duration metric: createHost completed in 2.352847958s
	I0706 11:11:56.552409    4272 start.go:83] releasing machines lock for "force-systemd-flag-413000", held for 2.353363625s
	W0706 11:11:56.552770    4272 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:56.560239    4272 out.go:177] 
	W0706 11:11:56.565269    4272 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:11:56.565294    4272 out.go:239] * 
	* 
	W0706 11:11:56.567908    4272 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:11:56.577174    4272 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-413000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-413000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-413000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (79.490709ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-413000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-413000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-07-06 11:11:56.672254 -0700 PDT m=+944.823051209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-413000 -n force-systemd-flag-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-413000 -n force-systemd-flag-413000: exit status 7 (34.47725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-413000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-413000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-413000
--- FAIL: TestForceSystemdFlag (10.05s)
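Every attempt in this failure reduces to the same error: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not listening at the path the config specifies (`SocketVMnetPath:/var/run/socket_vmnet`). A minimal check of that precondition is sketched below; the socket path is taken from the log, while the Homebrew service name is an assumption about how socket_vmnet was installed on this agent.

```shell
# Check whether the unix socket that minikube's qemu2 driver expects is
# present. If it is missing, the socket_vmnet daemon is not running and
# every VM create will fail with "Connection refused", as seen above.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  echo "socket present: $SOCK"
else
  # Service name assumes a Homebrew install of socket_vmnet.
  echo "socket missing: $SOCK -- try: sudo brew services start socket_vmnet"
fi
```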

TestForceSystemdEnv (11.22s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-786000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-786000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.013809209s)

-- stdout --
	* [force-systemd-env-786000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-786000 in cluster force-systemd-env-786000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:11:40.505626    4238 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:11:40.505748    4238 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:11:40.505751    4238 out.go:309] Setting ErrFile to fd 2...
	I0706 11:11:40.505753    4238 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:11:40.505821    4238 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:11:40.506830    4238 out.go:303] Setting JSON to false
	I0706 11:11:40.522117    4238 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2472,"bootTime":1688664628,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:11:40.522181    4238 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:11:40.527261    4238 out.go:177] * [force-systemd-env-786000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:11:40.534151    4238 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:11:40.538168    4238 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:11:40.534220    4238 notify.go:220] Checking for updates...
	I0706 11:11:40.544120    4238 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:11:40.547147    4238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:11:40.550087    4238 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:11:40.553154    4238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0706 11:11:40.556496    4238 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:11:40.556540    4238 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:11:40.560121    4238 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:11:40.567147    4238 start.go:297] selected driver: qemu2
	I0706 11:11:40.567153    4238 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:11:40.567158    4238 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:11:40.569146    4238 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:11:40.570723    4238 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:11:40.574204    4238 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0706 11:11:40.574220    4238 cni.go:84] Creating CNI manager for ""
	I0706 11:11:40.574227    4238 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:11:40.574230    4238 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:11:40.574237    4238 start_flags.go:319] config:
	{Name:force-systemd-env-786000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:11:40.578403    4238 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:11:40.585193    4238 out.go:177] * Starting control plane node force-systemd-env-786000 in cluster force-systemd-env-786000
	I0706 11:11:40.589107    4238 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:11:40.589128    4238 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:11:40.589145    4238 cache.go:57] Caching tarball of preloaded images
	I0706 11:11:40.589196    4238 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:11:40.589201    4238 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:11:40.589274    4238 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/force-systemd-env-786000/config.json ...
	I0706 11:11:40.589286    4238 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/force-systemd-env-786000/config.json: {Name:mk07af9fee32fb25d37e7776aff6a27b01d71537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:11:40.589483    4238 start.go:365] acquiring machines lock for force-systemd-env-786000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:11:40.589515    4238 start.go:369] acquired machines lock for "force-systemd-env-786000" in 23.292µs
	I0706 11:11:40.589527    4238 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:11:40.589554    4238 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:11:40.598157    4238 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0706 11:11:40.613806    4238 start.go:159] libmachine.API.Create for "force-systemd-env-786000" (driver="qemu2")
	I0706 11:11:40.613832    4238 client.go:168] LocalClient.Create starting
	I0706 11:11:40.613881    4238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:11:40.613899    4238 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:40.613912    4238 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:40.613952    4238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:11:40.613966    4238 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:40.613975    4238 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:40.614285    4238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:11:40.740274    4238 main.go:141] libmachine: Creating SSH key...
	I0706 11:11:40.843639    4238 main.go:141] libmachine: Creating Disk image...
	I0706 11:11:40.843647    4238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:11:40.843803    4238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/disk.qcow2
	I0706 11:11:40.852264    4238 main.go:141] libmachine: STDOUT: 
	I0706 11:11:40.852284    4238 main.go:141] libmachine: STDERR: 
	I0706 11:11:40.852348    4238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/disk.qcow2 +20000M
	I0706 11:11:40.859394    4238 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:11:40.859406    4238 main.go:141] libmachine: STDERR: 
	I0706 11:11:40.859427    4238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/disk.qcow2
	I0706 11:11:40.859433    4238 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:11:40.859470    4238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:6c:57:17:a7:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/disk.qcow2
	I0706 11:11:40.860971    4238 main.go:141] libmachine: STDOUT: 
	I0706 11:11:40.860983    4238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:11:40.861002    4238 client.go:171] LocalClient.Create took 247.167375ms
	I0706 11:11:42.863202    4238 start.go:128] duration metric: createHost completed in 2.273629375s
	I0706 11:11:42.863266    4238 start.go:83] releasing machines lock for "force-systemd-env-786000", held for 2.2737485s
	W0706 11:11:42.863329    4238 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:42.871751    4238 out.go:177] * Deleting "force-systemd-env-786000" in qemu2 ...
	W0706 11:11:42.893393    4238 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:42.893428    4238 start.go:687] Will try again in 5 seconds ...
	I0706 11:11:47.895677    4238 start.go:365] acquiring machines lock for force-systemd-env-786000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:11:49.164025    4238 start.go:369] acquired machines lock for "force-systemd-env-786000" in 1.268224333s
	I0706 11:11:49.164256    4238 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-786000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-786000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:11:49.164588    4238 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:11:49.174261    4238 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0706 11:11:49.218644    4238 start.go:159] libmachine.API.Create for "force-systemd-env-786000" (driver="qemu2")
	I0706 11:11:49.218674    4238 client.go:168] LocalClient.Create starting
	I0706 11:11:49.218814    4238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:11:49.218858    4238 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:49.218878    4238 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:49.218957    4238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:11:49.218984    4238 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:49.218996    4238 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:49.219544    4238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:11:49.355105    4238 main.go:141] libmachine: Creating SSH key...
	I0706 11:11:49.432842    4238 main.go:141] libmachine: Creating Disk image...
	I0706 11:11:49.432848    4238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:11:49.433004    4238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/disk.qcow2
	I0706 11:11:49.441727    4238 main.go:141] libmachine: STDOUT: 
	I0706 11:11:49.441739    4238 main.go:141] libmachine: STDERR: 
	I0706 11:11:49.441806    4238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/disk.qcow2 +20000M
	I0706 11:11:49.448895    4238 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:11:49.448907    4238 main.go:141] libmachine: STDERR: 
	I0706 11:11:49.448923    4238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/disk.qcow2
	I0706 11:11:49.448929    4238 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:11:49.448960    4238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:27:ee:86:d5:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/force-systemd-env-786000/disk.qcow2
	I0706 11:11:49.450503    4238 main.go:141] libmachine: STDOUT: 
	I0706 11:11:49.450515    4238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:11:49.450539    4238 client.go:171] LocalClient.Create took 231.86125ms
	I0706 11:11:51.452741    4238 start.go:128] duration metric: createHost completed in 2.288086709s
	I0706 11:11:51.452793    4238 start.go:83] releasing machines lock for "force-systemd-env-786000", held for 2.288711459s
	W0706 11:11:51.453166    4238 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:51.461695    4238 out.go:177] 
	W0706 11:11:51.465781    4238 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:11:51.465805    4238 out.go:239] * 
	* 
	W0706 11:11:51.468225    4238 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:11:51.477546    4238 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-786000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-786000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-786000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (75.028625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-786000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-786000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-07-06 11:11:51.571241 -0700 PDT m=+939.722021709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-786000 -n force-systemd-env-786000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-786000 -n force-systemd-env-786000: exit status 7 (32.77575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-786000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-786000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-786000
--- FAIL: TestForceSystemdEnv (11.22s)
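Both createHost attempts in the log above fail at the same point: `socket_vmnet_client` cannot reach `/var/run/socket_vmnet` ("Connection refused"), so QEMU never gets a network fd. A minimal pre-flight sketch of that check (hypothetical, not part of the test suite; the `SOCK` path is the `SocketVMnetPath` from the machine config logged above — note `-S` only verifies the path exists as a unix socket, a stale socket can still refuse connections):

```shell
# Hypothetical pre-flight check for the socket_vmnet failure seen above.
SOCK=/var/run/socket_vmnet   # SocketVMnetPath from the machine config
if [ -S "$SOCK" ]; then
  echo "socket_vmnet socket present at $SOCK"
else
  # Missing socket means the socket_vmnet daemon is not running,
  # which is the "Connection refused" path in the log.
  echo "socket_vmnet socket missing at $SOCK"
fi
```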

TestFunctional/parallel/ServiceCmdConnect (44.25s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-802000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-802000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-lvlqs" [a16de9b3-1485-4c17-a638-a2cfe81cd7be] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-lvlqs" [a16de9b3-1485-4c17-a638-a2cfe81cd7be] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.014412333s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:31451
functional_test.go:1660: error fetching http://192.168.105.4:31451: Get "http://192.168.105.4:31451": dial tcp 192.168.105.4:31451: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31451: Get "http://192.168.105.4:31451": dial tcp 192.168.105.4:31451: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31451: Get "http://192.168.105.4:31451": dial tcp 192.168.105.4:31451: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31451: Get "http://192.168.105.4:31451": dial tcp 192.168.105.4:31451: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31451: Get "http://192.168.105.4:31451": dial tcp 192.168.105.4:31451: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31451: Get "http://192.168.105.4:31451": dial tcp 192.168.105.4:31451: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31451: Get "http://192.168.105.4:31451": dial tcp 192.168.105.4:31451: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31451: Get "http://192.168.105.4:31451": dial tcp 192.168.105.4:31451: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:31451: Get "http://192.168.105.4:31451": dial tcp 192.168.105.4:31451: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-802000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-58d66798bb-lvlqs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-802000/192.168.105.4
Start Time:       Thu, 06 Jul 2023 11:01:02 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=58d66798bb
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-58d66798bb
Containers:
  echoserver-arm:
    Container ID:   docker://c0e3dead5010d5cd2f8aee30f025375341e265d12e18d49c85b689ec680d9668
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 06 Jul 2023 11:01:25 -0700
      Finished:     Thu, 06 Jul 2023 11:01:25 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xfnds (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-xfnds:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  43s                default-scheduler  Successfully assigned default/hello-node-connect-58d66798bb-lvlqs to functional-802000
  Normal   Pulling    42s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     37s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 5.585866212s (5.585874878s including waiting)
  Normal   Created    20s (x3 over 37s)  kubelet            Created container echoserver-arm
  Normal   Started    20s (x3 over 37s)  kubelet            Started container echoserver-arm
  Normal   Pulled     20s (x2 over 36s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    8s (x4 over 35s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-58d66798bb-lvlqs_default(a16de9b3-1485-4c17-a638-a2cfe81cd7be)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-802000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
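The single log line above, `exec format error`, is the kernel refusing to execute a binary built for a different CPU architecture than the arm64 node, which is why the pod crash-loops. One way to see which CPU an ELF binary actually targets (a hedged diagnostic sketch, not part of the test suite) is to read the `e_machine` field from its header:

```shell
# The ELF e_machine field (2 bytes at offset 18, little-endian) names the
# target CPU: 3e00 = x86-64, b700 = aarch64. An aarch64 kernel exec()ing an
# x86-64 binary fails with exactly the "exec format error" logged above.
elf_arch() {
  m=$(od -An -tx1 -j18 -N2 "$1" | tr -d ' ')
  case "$m" in
    3e00) echo x86-64 ;;
    b700) echo aarch64 ;;
    *)    echo "other ($m)" ;;
  esac
}
elf_arch /bin/ls
```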
functional_test.go:1613: (dbg) Run:  kubectl --context functional-802000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.201.55
IPs:                      10.106.201.55
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31451/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-802000 -n functional-802000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-802000                                                                                                | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port561555418/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh -- ls                                                                                         | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh sudo                                                                                          | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-802000                                                                                                | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3268852121/001:/mount1  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-802000                                                                                                | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3268852121/001:/mount2  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-802000                                                                                                | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3268852121/001:/mount3  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-802000 ssh findmnt                                                                                       | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-802000                                                                                                | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-802000                                                                                                | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-802000 --dry-run                                                                                      | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT |                     |
	|           | -p functional-802000                                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/06 11:01:44
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0706 11:01:44.781863    3109 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:01:44.781990    3109 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:01:44.781992    3109 out.go:309] Setting ErrFile to fd 2...
	I0706 11:01:44.781995    3109 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:01:44.782058    3109 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:01:44.783027    3109 out.go:303] Setting JSON to false
	I0706 11:01:44.798553    3109 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1876,"bootTime":1688664628,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:01:44.798622    3109 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:01:44.803645    3109 out.go:177] * [functional-802000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:01:44.810663    3109 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:01:44.810702    3109 notify.go:220] Checking for updates...
	I0706 11:01:44.814737    3109 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:01:44.817678    3109 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:01:44.820686    3109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:01:44.823714    3109 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:01:44.826721    3109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:01:44.829889    3109 config.go:182] Loaded profile config "functional-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:01:44.830123    3109 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:01:44.834715    3109 out.go:177] * Using the qemu2 driver based on existing profile
	I0706 11:01:44.841640    3109 start.go:297] selected driver: qemu2
	I0706 11:01:44.841646    3109 start.go:944] validating driver "qemu2" against &{Name:functional-802000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.3 ClusterName:functional-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:01:44.841711    3109 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:01:44.843617    3109 cni.go:84] Creating CNI manager for ""
	I0706 11:01:44.843629    3109 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:01:44.843635    3109 start_flags.go:319] config:
	{Name:functional-802000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-802000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:01:44.855627    3109 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-07-06 17:58:09 UTC, ends at Thu 2023-07-06 18:01:46 UTC. --
	Jul 06 18:01:27 functional-802000 dockerd[7254]: time="2023-07-06T18:01:27.897198374Z" level=info msg="shim disconnected" id=476ef0c84ec6715378656623e0dd18f4611a252d51ba816d563886055c578863 namespace=moby
	Jul 06 18:01:27 functional-802000 dockerd[7254]: time="2023-07-06T18:01:27.897272289Z" level=warning msg="cleaning up after shim disconnected" id=476ef0c84ec6715378656623e0dd18f4611a252d51ba816d563886055c578863 namespace=moby
	Jul 06 18:01:27 functional-802000 dockerd[7254]: time="2023-07-06T18:01:27.897292497Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 06 18:01:29 functional-802000 dockerd[7248]: time="2023-07-06T18:01:29.226885362Z" level=info msg="ignoring event" container=92a00898fbd9636b458e5f5931953b77b124193089b11cee92b3cd711f7d49e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 06 18:01:29 functional-802000 dockerd[7254]: time="2023-07-06T18:01:29.226950819Z" level=info msg="shim disconnected" id=92a00898fbd9636b458e5f5931953b77b124193089b11cee92b3cd711f7d49e2 namespace=moby
	Jul 06 18:01:29 functional-802000 dockerd[7254]: time="2023-07-06T18:01:29.226983818Z" level=warning msg="cleaning up after shim disconnected" id=92a00898fbd9636b458e5f5931953b77b124193089b11cee92b3cd711f7d49e2 namespace=moby
	Jul 06 18:01:29 functional-802000 dockerd[7254]: time="2023-07-06T18:01:29.227002734Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 06 18:01:30 functional-802000 dockerd[7254]: time="2023-07-06T18:01:30.158601968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 18:01:30 functional-802000 dockerd[7254]: time="2023-07-06T18:01:30.158684008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:01:30 functional-802000 dockerd[7254]: time="2023-07-06T18:01:30.158696424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 18:01:30 functional-802000 dockerd[7254]: time="2023-07-06T18:01:30.158701132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:01:30 functional-802000 dockerd[7248]: time="2023-07-06T18:01:30.204354985Z" level=info msg="ignoring event" container=a9d93da844b404ec196fd4e13e1efdd2c43cff7f35b8fd69abe93dbde63c15ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 06 18:01:30 functional-802000 dockerd[7254]: time="2023-07-06T18:01:30.204524606Z" level=info msg="shim disconnected" id=a9d93da844b404ec196fd4e13e1efdd2c43cff7f35b8fd69abe93dbde63c15ee namespace=moby
	Jul 06 18:01:30 functional-802000 dockerd[7254]: time="2023-07-06T18:01:30.204571438Z" level=warning msg="cleaning up after shim disconnected" id=a9d93da844b404ec196fd4e13e1efdd2c43cff7f35b8fd69abe93dbde63c15ee namespace=moby
	Jul 06 18:01:30 functional-802000 dockerd[7254]: time="2023-07-06T18:01:30.204580397Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 06 18:01:45 functional-802000 dockerd[7254]: time="2023-07-06T18:01:45.790594076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 18:01:45 functional-802000 dockerd[7254]: time="2023-07-06T18:01:45.790734823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:01:45 functional-802000 dockerd[7254]: time="2023-07-06T18:01:45.790762864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 18:01:45 functional-802000 dockerd[7254]: time="2023-07-06T18:01:45.790772739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:01:45 functional-802000 dockerd[7254]: time="2023-07-06T18:01:45.791493641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 18:01:45 functional-802000 dockerd[7254]: time="2023-07-06T18:01:45.791513390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:01:45 functional-802000 dockerd[7254]: time="2023-07-06T18:01:45.791525557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 18:01:45 functional-802000 dockerd[7254]: time="2023-07-06T18:01:45.791530390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:01:45 functional-802000 cri-dockerd[7511]: time="2023-07-06T18:01:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8fc8130417ede10633329bddc67623df40e9ac37f265926bf8cac446157df20f/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 06 18:01:45 functional-802000 cri-dockerd[7511]: time="2023-07-06T18:01:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/44f500938d04e761de66bf3b39cbeea12e96c4bbc438dd2bffb063d578950bea/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	a9d93da844b40       72565bf5bbedf                                                                                         16 seconds ago       Exited              echoserver-arm            2                   fb99c3afb1cdf
	476ef0c84ec67       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 seconds ago       Exited              mount-munger              0                   92a00898fbd96
	c0e3dead5010d       72565bf5bbedf                                                                                         21 seconds ago       Exited              echoserver-arm            2                   2e96e52f27b82
	459a709fd7620       nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef                         36 seconds ago       Running             myfrontend                0                   d1a6dc32c6256
	574cb731a5119       nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                         51 seconds ago       Running             nginx                     0                   7b1707092c6ef
	d1ae77c0be647       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   6180a19373d89
	bf63d1da4b296       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   e93c0de1b0e28
	864845d1073e2       fb73e92641fd5                                                                                         About a minute ago   Running             kube-proxy                2                   625d52028573b
	225e1e04cc5d1       24bc64e911039                                                                                         About a minute ago   Running             etcd                      2                   97ce1d10f82b6
	c3d6a5cad4f71       bcb9e554eaab6                                                                                         About a minute ago   Running             kube-scheduler            2                   5d874e6f43a7f
	58a48ed48245c       39dfb036b0986                                                                                         About a minute ago   Running             kube-apiserver            0                   2abad1f2e2ce7
	c54bd2e69057f       ab3683b584ae5                                                                                         About a minute ago   Running             kube-controller-manager   2                   5d9e2e1b30eb4
	ac7e40632c14a       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   8474d7f3cd691
	f9e1f7d391f97       97e04611ad434                                                                                         2 minutes ago        Exited              coredns                   1                   a57a9bbff6822
	c3051da0ef3d6       fb73e92641fd5                                                                                         2 minutes ago        Exited              kube-proxy                1                   44a9bf9f9ffe9
	e753e3a1898ea       24bc64e911039                                                                                         2 minutes ago        Exited              etcd                      1                   5a3afaf2bed33
	f89308e3ec554       bcb9e554eaab6                                                                                         2 minutes ago        Exited              kube-scheduler            1                   30e9ad4bc73b0
	4ba88b5fe6f6f       ab3683b584ae5                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   44df82c738d44
	
	* 
	* ==> coredns [bf63d1da4b29] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36816 - 2873 "HINFO IN 5651943596483520199.1808248569975607576. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004809926s
	[INFO] 10.244.0.1:17823 - 26996 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000110664s
	[INFO] 10.244.0.1:10513 - 9523 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000103289s
	[INFO] 10.244.0.1:60485 - 41675 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000038374s
	[INFO] 10.244.0.1:27373 - 24429 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.0009491s
	[INFO] 10.244.0.1:23041 - 49519 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000087331s
	[INFO] 10.244.0.1:41541 - 61887 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.00012958s
	
	* 
	* ==> coredns [f9e1f7d391f9] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53383 - 41713 "HINFO IN 4897703405314902568.8859652695471047690. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004365612s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-802000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-802000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b6e1a3abc91e215b081da44b95c5d4a34c954e9b
	                    minikube.k8s.io/name=functional-802000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_06T10_58_26_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 06 Jul 2023 17:58:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-802000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 06 Jul 2023 18:01:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 06 Jul 2023 18:01:29 +0000   Thu, 06 Jul 2023 17:58:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 06 Jul 2023 18:01:29 +0000   Thu, 06 Jul 2023 17:58:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 06 Jul 2023 18:01:29 +0000   Thu, 06 Jul 2023 17:58:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 06 Jul 2023 18:01:29 +0000   Thu, 06 Jul 2023 17:58:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-802000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 175a14efbcdc4171a2b2b5ee6bef0b34
	  System UUID:                175a14efbcdc4171a2b2b5ee6bef0b34
	  Boot ID:                    5885ef77-3f1c-4528-892f-4783a84d8b88
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-7b684b55f9-wkkbh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     hello-node-connect-58d66798bb-lvlqs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 coredns-5d78c9869d-zs92j                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m6s
	  kube-system                 etcd-functional-802000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m20s
	  kube-system                 kube-apiserver-functional-802000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-functional-802000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 kube-proxy-k658j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kube-scheduler-functional-802000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  kubernetes-dashboard        dashboard-metrics-scraper-5dd9cbfd69-zj46r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-5c5cfc8747-468xz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m5s                   kube-proxy       
	  Normal   Starting                 76s                    kube-proxy       
	  Normal   Starting                 2m3s                   kube-proxy       
	  Normal   Starting                 3m25s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m25s (x8 over 3m25s)  kubelet          Node functional-802000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m25s (x8 over 3m25s)  kubelet          Node functional-802000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m25s (x7 over 3m25s)  kubelet          Node functional-802000 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m20s                  kubelet          Node functional-802000 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  3m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    3m20s                  kubelet          Node functional-802000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m20s                  kubelet          Node functional-802000 status is now: NodeHasSufficientPID
	  Normal   Starting                 3m20s                  kubelet          Starting kubelet.
	  Normal   NodeReady                3m16s                  kubelet          Node functional-802000 status is now: NodeReady
	  Normal   RegisteredNode           3m8s                   node-controller  Node functional-802000 event: Registered Node functional-802000 in Controller
	  Warning  ContainerGCFailed        2m20s                  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   RegisteredNode           112s                   node-controller  Node functional-802000 event: Registered Node functional-802000 in Controller
	  Normal   Starting                 81s                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  81s (x8 over 81s)      kubelet          Node functional-802000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s (x8 over 81s)      kubelet          Node functional-802000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s (x7 over 81s)      kubelet          Node functional-802000 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  81s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           66s                    node-controller  Node functional-802000 event: Registered Node functional-802000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.143703] systemd-fstab-generator[4321]: Ignoring "noauto" for root device
	[  +0.098488] systemd-fstab-generator[4332]: Ignoring "noauto" for root device
	[  +0.103685] systemd-fstab-generator[4345]: Ignoring "noauto" for root device
	[ +11.420498] systemd-fstab-generator[4906]: Ignoring "noauto" for root device
	[  +0.080021] systemd-fstab-generator[4917]: Ignoring "noauto" for root device
	[  +0.083911] systemd-fstab-generator[4928]: Ignoring "noauto" for root device
	[  +0.079710] systemd-fstab-generator[4939]: Ignoring "noauto" for root device
	[  +0.092043] systemd-fstab-generator[5011]: Ignoring "noauto" for root device
	[  +7.459384] kauditd_printk_skb: 34 callbacks suppressed
	[Jul 6 18:00] systemd-fstab-generator[6784]: Ignoring "noauto" for root device
	[  +0.156055] systemd-fstab-generator[6823]: Ignoring "noauto" for root device
	[  +0.098033] systemd-fstab-generator[6834]: Ignoring "noauto" for root device
	[  +0.103408] systemd-fstab-generator[6847]: Ignoring "noauto" for root device
	[ +11.513982] systemd-fstab-generator[7400]: Ignoring "noauto" for root device
	[  +0.078338] systemd-fstab-generator[7411]: Ignoring "noauto" for root device
	[  +0.081097] systemd-fstab-generator[7422]: Ignoring "noauto" for root device
	[  +0.081621] systemd-fstab-generator[7433]: Ignoring "noauto" for root device
	[  +0.087316] systemd-fstab-generator[7504]: Ignoring "noauto" for root device
	[  +0.896471] systemd-fstab-generator[7752]: Ignoring "noauto" for root device
	[  +4.626317] kauditd_printk_skb: 29 callbacks suppressed
	[ +22.611903] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.410644] kauditd_printk_skb: 1 callbacks suppressed
	[Jul 6 18:01] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +12.723686] kauditd_printk_skb: 1 callbacks suppressed
	[  +9.011796] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [225e1e04cc5d] <==
	* {"level":"info","ts":"2023-07-06T18:00:26.100Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-06T18:00:26.100Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-06T18:00:26.100Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-06T18:00:26.100Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-06T18:00:26.100Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-06T18:00:26.100Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-06T18:00:26.100Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-06T18:00:26.100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-07-06T18:00:26.101Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-07-06T18:00:26.101Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T18:00:26.101Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T18:00:27.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-07-06T18:00:27.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-07-06T18:00:27.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-07-06T18:00:27.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-07-06T18:00:27.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-07-06T18:00:27.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-07-06T18:00:27.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-07-06T18:00:27.463Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-802000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-06T18:00:27.463Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-06T18:00:27.463Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-06T18:00:27.464Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-06T18:00:27.464Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-06T18:00:27.466Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-06T18:00:27.466Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	
	* 
	* ==> etcd [e753e3a1898e] <==
	* {"level":"info","ts":"2023-07-06T17:59:39.887Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-06T17:59:39.887Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-06T17:59:39.887Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-06T17:59:39.887Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-06T17:59:39.887Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-06T17:59:41.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-06T17:59:41.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-06T17:59:41.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-07-06T17:59:41.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-07-06T17:59:41.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-07-06T17:59:41.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-07-06T17:59:41.654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-07-06T17:59:41.657Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-06T17:59:41.657Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-802000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-06T17:59:41.657Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-06T17:59:41.660Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-06T17:59:41.661Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-06T17:59:41.661Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-06T17:59:41.660Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-07-06T18:00:12.319Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-07-06T18:00:12.319Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"functional-802000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"info","ts":"2023-07-06T18:00:12.330Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-07-06T18:00:12.331Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-06T18:00:12.333Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-06T18:00:12.333Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"functional-802000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> kernel <==
	*  18:01:46 up 3 min,  0 users,  load average: 0.52, 0.37, 0.16
	Linux functional-802000 5.10.57 #1 SMP PREEMPT Fri Jun 30 18:49:58 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [58a48ed48245] <==
	* I0706 18:00:28.188957       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0706 18:00:28.189120       1 shared_informer.go:318] Caches are synced for configmaps
	I0706 18:00:28.189136       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0706 18:00:28.190633       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0706 18:00:28.190714       1 aggregator.go:152] initial CRD sync complete...
	I0706 18:00:28.190722       1 autoregister_controller.go:141] Starting autoregister controller
	I0706 18:00:28.190724       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0706 18:00:28.190726       1 cache.go:39] Caches are synced for autoregister controller
	I0706 18:00:28.968364       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0706 18:00:29.100061       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0706 18:00:29.718399       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0706 18:00:29.722289       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0706 18:00:29.739747       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0706 18:00:29.753414       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0706 18:00:29.758573       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0706 18:00:40.551917       1 controller.go:624] quota admission added evaluator for: endpoints
	I0706 18:00:40.611689       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0706 18:00:47.002369       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs=map[IPv4:10.105.185.56]
	I0706 18:00:52.188560       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.97.69.138]
	I0706 18:01:02.575999       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0706 18:01:02.619259       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.106.201.55]
	I0706 18:01:17.047006       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.109.253.184]
	I0706 18:01:45.367811       1 controller.go:624] quota admission added evaluator for: namespaces
	I0706 18:01:45.454676       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.104.213.191]
	I0706 18:01:45.473184       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.100.142.41]
	
	* 
	* ==> kube-controller-manager [4ba88b5fe6f6] <==
	* I0706 17:59:54.490209       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0706 17:59:54.490228       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-802000"
	I0706 17:59:54.490250       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0706 17:59:54.490266       1 taint_manager.go:211] "Sending events to api server"
	I0706 17:59:54.490339       1 event.go:307] "Event occurred" object="functional-802000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-802000 event: Registered Node functional-802000 in Controller"
	I0706 17:59:54.491435       1 shared_informer.go:318] Caches are synced for expand
	I0706 17:59:54.492876       1 shared_informer.go:318] Caches are synced for PVC protection
	I0706 17:59:54.498024       1 shared_informer.go:318] Caches are synced for namespace
	I0706 17:59:54.501249       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0706 17:59:54.508575       1 shared_informer.go:318] Caches are synced for TTL
	I0706 17:59:54.516803       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0706 17:59:54.516806       1 shared_informer.go:318] Caches are synced for resource quota
	I0706 17:59:54.528174       1 shared_informer.go:318] Caches are synced for resource quota
	I0706 17:59:54.572394       1 shared_informer.go:318] Caches are synced for deployment
	I0706 17:59:54.574545       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0706 17:59:54.574602       1 shared_informer.go:318] Caches are synced for disruption
	I0706 17:59:54.608443       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0706 17:59:54.608476       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0706 17:59:54.608503       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0706 17:59:54.608535       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0706 17:59:54.609633       1 shared_informer.go:318] Caches are synced for attach detach
	I0706 17:59:54.709733       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0706 17:59:55.032368       1 shared_informer.go:318] Caches are synced for garbage collector
	I0706 17:59:55.099960       1 shared_informer.go:318] Caches are synced for garbage collector
	I0706 17:59:55.100084       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [c54bd2e69057] <==
	* I0706 18:00:40.695924       1 shared_informer.go:318] Caches are synced for service account
	I0706 18:00:41.018368       1 shared_informer.go:318] Caches are synced for garbage collector
	I0706 18:00:41.099408       1 shared_informer.go:318] Caches are synced for garbage collector
	I0706 18:00:41.099508       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0706 18:00:57.060714       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0706 18:01:02.578421       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-58d66798bb to 1"
	I0706 18:01:02.586753       1 event.go:307] "Event occurred" object="default/hello-node-connect-58d66798bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-58d66798bb-lvlqs"
	I0706 18:01:17.005515       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-7b684b55f9 to 1"
	I0706 18:01:17.010136       1 event.go:307] "Event occurred" object="default/hello-node-7b684b55f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-7b684b55f9-wkkbh"
	I0706 18:01:45.388072       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5dd9cbfd69 to 1"
	I0706 18:01:45.393401       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0706 18:01:45.399619       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0706 18:01:45.400988       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5c5cfc8747 to 1"
	E0706 18:01:45.403055       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0706 18:01:45.403280       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0706 18:01:45.406764       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0706 18:01:45.407263       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0706 18:01:45.407284       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0706 18:01:45.409837       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0706 18:01:45.414102       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0706 18:01:45.414147       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0706 18:01:45.417966       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0706 18:01:45.417989       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0706 18:01:45.429823       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5dd9cbfd69-zj46r"
	I0706 18:01:45.443426       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5c5cfc8747-468xz"
	
	* 
	* ==> kube-proxy [864845d1073e] <==
	* I0706 18:00:29.770145       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0706 18:00:29.770327       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0706 18:00:29.770357       1 server_others.go:554] "Using iptables proxy"
	I0706 18:00:29.778534       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0706 18:00:29.778545       1 server_others.go:192] "Using iptables Proxier"
	I0706 18:00:29.778602       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0706 18:00:29.778846       1 server.go:658] "Version info" version="v1.27.3"
	I0706 18:00:29.778853       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 18:00:29.779169       1 config.go:188] "Starting service config controller"
	I0706 18:00:29.779181       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0706 18:00:29.779189       1 config.go:97] "Starting endpoint slice config controller"
	I0706 18:00:29.779191       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0706 18:00:29.779433       1 config.go:315] "Starting node config controller"
	I0706 18:00:29.779436       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0706 18:00:29.879828       1 shared_informer.go:318] Caches are synced for node config
	I0706 18:00:29.879841       1 shared_informer.go:318] Caches are synced for service config
	I0706 18:00:29.879852       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [c3051da0ef3d] <==
	* I0706 17:59:42.365926       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0706 17:59:42.365994       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0706 17:59:42.366006       1 server_others.go:554] "Using iptables proxy"
	I0706 17:59:42.380191       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0706 17:59:42.380258       1 server_others.go:192] "Using iptables Proxier"
	I0706 17:59:42.380313       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0706 17:59:42.380589       1 server.go:658] "Version info" version="v1.27.3"
	I0706 17:59:42.380593       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 17:59:42.381184       1 config.go:188] "Starting service config controller"
	I0706 17:59:42.382459       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0706 17:59:42.382510       1 config.go:97] "Starting endpoint slice config controller"
	I0706 17:59:42.382542       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0706 17:59:42.382889       1 config.go:315] "Starting node config controller"
	I0706 17:59:42.382896       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0706 17:59:42.483043       1 shared_informer.go:318] Caches are synced for node config
	I0706 17:59:42.483044       1 shared_informer.go:318] Caches are synced for service config
	I0706 17:59:42.483061       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [c3d6a5cad4f7] <==
	* I0706 18:00:26.545463       1 serving.go:348] Generated self-signed cert in-memory
	W0706 18:00:28.118844       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0706 18:00:28.118856       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0706 18:00:28.118861       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0706 18:00:28.118864       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0706 18:00:28.154279       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0706 18:00:28.154293       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 18:00:28.155617       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0706 18:00:28.156520       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0706 18:00:28.156576       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0706 18:00:28.156591       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0706 18:00:28.257281       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [f89308e3ec55] <==
	* I0706 17:59:40.302498       1 serving.go:348] Generated self-signed cert in-memory
	W0706 17:59:42.335373       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0706 17:59:42.335492       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0706 17:59:42.335519       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0706 17:59:42.335534       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0706 17:59:42.346699       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0706 17:59:42.346713       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 17:59:42.347297       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0706 17:59:42.347327       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0706 17:59:42.347740       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0706 17:59:42.347766       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0706 17:59:42.448254       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0706 18:00:12.341612       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0706 18:00:12.341634       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0706 18:00:12.341696       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0706 18:00:12.341794       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0706 18:00:12.341813       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-07-06 17:58:09 UTC, ends at Thu 2023-07-06 18:01:46 UTC. --
	Jul 06 18:01:26 functional-802000 kubelet[7758]: I0706 18:01:26.130793    7758 scope.go:115] "RemoveContainer" containerID="c0e3dead5010d5cd2f8aee30f025375341e265d12e18d49c85b689ec680d9668"
	Jul 06 18:01:26 functional-802000 kubelet[7758]: E0706 18:01:26.130879    7758 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-lvlqs_default(a16de9b3-1485-4c17-a638-a2cfe81cd7be)\"" pod="default/hello-node-connect-58d66798bb-lvlqs" podUID=a16de9b3-1485-4c17-a638-a2cfe81cd7be
	Jul 06 18:01:29 functional-802000 kubelet[7758]: I0706 18:01:29.369765    7758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdgpk\" (UniqueName: \"kubernetes.io/projected/832c1af1-3510-42dc-a4eb-8299b30dde4a-kube-api-access-cdgpk\") pod \"832c1af1-3510-42dc-a4eb-8299b30dde4a\" (UID: \"832c1af1-3510-42dc-a4eb-8299b30dde4a\") "
	Jul 06 18:01:29 functional-802000 kubelet[7758]: I0706 18:01:29.369800    7758 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/832c1af1-3510-42dc-a4eb-8299b30dde4a-test-volume\") pod \"832c1af1-3510-42dc-a4eb-8299b30dde4a\" (UID: \"832c1af1-3510-42dc-a4eb-8299b30dde4a\") "
	Jul 06 18:01:29 functional-802000 kubelet[7758]: I0706 18:01:29.369859    7758 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/832c1af1-3510-42dc-a4eb-8299b30dde4a-test-volume" (OuterVolumeSpecName: "test-volume") pod "832c1af1-3510-42dc-a4eb-8299b30dde4a" (UID: "832c1af1-3510-42dc-a4eb-8299b30dde4a"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 06 18:01:29 functional-802000 kubelet[7758]: I0706 18:01:29.370758    7758 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832c1af1-3510-42dc-a4eb-8299b30dde4a-kube-api-access-cdgpk" (OuterVolumeSpecName: "kube-api-access-cdgpk") pod "832c1af1-3510-42dc-a4eb-8299b30dde4a" (UID: "832c1af1-3510-42dc-a4eb-8299b30dde4a"). InnerVolumeSpecName "kube-api-access-cdgpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 06 18:01:29 functional-802000 kubelet[7758]: I0706 18:01:29.469964    7758 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cdgpk\" (UniqueName: \"kubernetes.io/projected/832c1af1-3510-42dc-a4eb-8299b30dde4a-kube-api-access-cdgpk\") on node \"functional-802000\" DevicePath \"\""
	Jul 06 18:01:29 functional-802000 kubelet[7758]: I0706 18:01:29.469990    7758 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/832c1af1-3510-42dc-a4eb-8299b30dde4a-test-volume\") on node \"functional-802000\" DevicePath \"\""
	Jul 06 18:01:30 functional-802000 kubelet[7758]: I0706 18:01:30.115867    7758 scope.go:115] "RemoveContainer" containerID="437f55e07489a0e571ca0c18b3fe92d3310b1b3fdd8b9dcb04e6b29980c77bcd"
	Jul 06 18:01:30 functional-802000 kubelet[7758]: I0706 18:01:30.175282    7758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92a00898fbd9636b458e5f5931953b77b124193089b11cee92b3cd711f7d49e2"
	Jul 06 18:01:31 functional-802000 kubelet[7758]: I0706 18:01:31.207746    7758 scope.go:115] "RemoveContainer" containerID="437f55e07489a0e571ca0c18b3fe92d3310b1b3fdd8b9dcb04e6b29980c77bcd"
	Jul 06 18:01:31 functional-802000 kubelet[7758]: I0706 18:01:31.207867    7758 scope.go:115] "RemoveContainer" containerID="a9d93da844b404ec196fd4e13e1efdd2c43cff7f35b8fd69abe93dbde63c15ee"
	Jul 06 18:01:31 functional-802000 kubelet[7758]: E0706 18:01:31.207949    7758 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-wkkbh_default(b8c37d75-17c7-478d-8c03-4d6a2dbae92e)\"" pod="default/hello-node-7b684b55f9-wkkbh" podUID=b8c37d75-17c7-478d-8c03-4d6a2dbae92e
	Jul 06 18:01:37 functional-802000 kubelet[7758]: I0706 18:01:37.117193    7758 scope.go:115] "RemoveContainer" containerID="c0e3dead5010d5cd2f8aee30f025375341e265d12e18d49c85b689ec680d9668"
	Jul 06 18:01:37 functional-802000 kubelet[7758]: E0706 18:01:37.117698    7758 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-lvlqs_default(a16de9b3-1485-4c17-a638-a2cfe81cd7be)\"" pod="default/hello-node-connect-58d66798bb-lvlqs" podUID=a16de9b3-1485-4c17-a638-a2cfe81cd7be
	Jul 06 18:01:45 functional-802000 kubelet[7758]: I0706 18:01:45.433917    7758 topology_manager.go:212] "Topology Admit Handler"
	Jul 06 18:01:45 functional-802000 kubelet[7758]: E0706 18:01:45.433974    7758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="832c1af1-3510-42dc-a4eb-8299b30dde4a" containerName="mount-munger"
	Jul 06 18:01:45 functional-802000 kubelet[7758]: I0706 18:01:45.433989    7758 memory_manager.go:346] "RemoveStaleState removing state" podUID="832c1af1-3510-42dc-a4eb-8299b30dde4a" containerName="mount-munger"
	Jul 06 18:01:45 functional-802000 kubelet[7758]: I0706 18:01:45.447705    7758 topology_manager.go:212] "Topology Admit Handler"
	Jul 06 18:01:45 functional-802000 kubelet[7758]: I0706 18:01:45.577753    7758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d9596d1a-a994-4344-b156-eecc8e838364-tmp-volume\") pod \"kubernetes-dashboard-5c5cfc8747-468xz\" (UID: \"d9596d1a-a994-4344-b156-eecc8e838364\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-468xz"
	Jul 06 18:01:45 functional-802000 kubelet[7758]: I0706 18:01:45.577783    7758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8d99b1a4-7c64-4d4b-b89c-3976ce7146a8-tmp-volume\") pod \"dashboard-metrics-scraper-5dd9cbfd69-zj46r\" (UID: \"8d99b1a4-7c64-4d4b-b89c-3976ce7146a8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-zj46r"
	Jul 06 18:01:45 functional-802000 kubelet[7758]: I0706 18:01:45.577797    7758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlcxj\" (UniqueName: \"kubernetes.io/projected/8d99b1a4-7c64-4d4b-b89c-3976ce7146a8-kube-api-access-jlcxj\") pod \"dashboard-metrics-scraper-5dd9cbfd69-zj46r\" (UID: \"8d99b1a4-7c64-4d4b-b89c-3976ce7146a8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-zj46r"
	Jul 06 18:01:45 functional-802000 kubelet[7758]: I0706 18:01:45.577809    7758 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5df\" (UniqueName: \"kubernetes.io/projected/d9596d1a-a994-4344-b156-eecc8e838364-kube-api-access-2q5df\") pod \"kubernetes-dashboard-5c5cfc8747-468xz\" (UID: \"d9596d1a-a994-4344-b156-eecc8e838364\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-468xz"
	Jul 06 18:01:46 functional-802000 kubelet[7758]: I0706 18:01:46.116136    7758 scope.go:115] "RemoveContainer" containerID="a9d93da844b404ec196fd4e13e1efdd2c43cff7f35b8fd69abe93dbde63c15ee"
	Jul 06 18:01:46 functional-802000 kubelet[7758]: E0706 18:01:46.116227    7758 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-wkkbh_default(b8c37d75-17c7-478d-8c03-4d6a2dbae92e)\"" pod="default/hello-node-7b684b55f9-wkkbh" podUID=b8c37d75-17c7-478d-8c03-4d6a2dbae92e
	
	* 
	* ==> storage-provisioner [ac7e40632c14] <==
	* I0706 17:59:40.128432       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0706 17:59:42.366449       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0706 17:59:42.366771       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0706 17:59:59.771584       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0706 17:59:59.771744       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-802000_5180b804-3640-4395-b7b6-8b398f2225e6!
	I0706 17:59:59.772107       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"98b26e21-84df-4efb-8e6e-d4674358ea05", APIVersion:"v1", ResourceVersion:"561", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-802000_5180b804-3640-4395-b7b6-8b398f2225e6 became leader
	I0706 17:59:59.872170       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-802000_5180b804-3640-4395-b7b6-8b398f2225e6!
	
	* 
	* ==> storage-provisioner [d1ae77c0be64] <==
	* I0706 18:00:29.709001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0706 18:00:29.740703       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0706 18:00:29.740725       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0706 18:00:47.137595       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0706 18:00:47.137804       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-802000_f5160011-8c44-48d0-845b-b1f01ef05a0a!
	I0706 18:00:47.138277       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"98b26e21-84df-4efb-8e6e-d4674358ea05", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-802000_f5160011-8c44-48d0-845b-b1f01ef05a0a became leader
	I0706 18:00:47.238404       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-802000_f5160011-8c44-48d0-845b-b1f01ef05a0a!
	I0706 18:00:57.061597       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0706 18:00:57.061747       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    9239b1a1-3696-4a0b-809d-d7156e37967b 391 0 2023-07-06 17:58:40 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-07-06 17:58:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-5d440432-b734-4876-b41e-7015c7369c43 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  5d440432-b734-4876-b41e-7015c7369c43 722 0 2023-07-06 18:00:57 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-07-06 18:00:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-07-06 18:00:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0706 18:00:57.062231       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-5d440432-b734-4876-b41e-7015c7369c43" provisioned
	I0706 18:00:57.062264       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0706 18:00:57.062309       1 volume_store.go:212] Trying to save persistentvolume "pvc-5d440432-b734-4876-b41e-7015c7369c43"
	I0706 18:00:57.062818       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"5d440432-b734-4876-b41e-7015c7369c43", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0706 18:00:57.066311       1 volume_store.go:219] persistentvolume "pvc-5d440432-b734-4876-b41e-7015c7369c43" saved
	I0706 18:00:57.067661       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"5d440432-b734-4876-b41e-7015c7369c43", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-5d440432-b734-4876-b41e-7015c7369c43
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-802000 -n functional-802000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-802000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-5dd9cbfd69-zj46r kubernetes-dashboard-5c5cfc8747-468xz
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-802000 describe pod busybox-mount dashboard-metrics-scraper-5dd9cbfd69-zj46r kubernetes-dashboard-5c5cfc8747-468xz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-802000 describe pod busybox-mount dashboard-metrics-scraper-5dd9cbfd69-zj46r kubernetes-dashboard-5c5cfc8747-468xz: exit status 1 (41.487042ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-802000/192.168.105.4
	Start Time:       Thu, 06 Jul 2023 11:01:25 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://476ef0c84ec6715378656623e0dd18f4611a252d51ba816d563886055c578863
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 06 Jul 2023 11:01:27 -0700
	      Finished:     Thu, 06 Jul 2023 11:01:27 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cdgpk (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-cdgpk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  21s   default-scheduler  Successfully assigned default/busybox-mount to functional-802000
	  Normal  Pulling    20s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     19s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.728708008s (1.728717425s including waiting)
	  Normal  Created    19s   kubelet            Created container mount-munger
	  Normal  Started    19s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5dd9cbfd69-zj46r" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5c5cfc8747-468xz" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-802000 describe pod busybox-mount dashboard-metrics-scraper-5dd9cbfd69-zj46r kubernetes-dashboard-5c5cfc8747-468xz: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (44.25s)
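The kubelet log above shows the `echoserver-arm` container repeatedly entering CrashLoopBackOff, which is what times out the service-connect check. On Apple Silicon hosts a mismatch between the image architecture and the node architecture is a common cause. A minimal diagnosis sketch, assuming kubectl is installed: `arch_to_platform` is a hypothetical helper (not part of the test suite) that maps `uname -m` output to a docker-style platform string, and the pod/context names are copied verbatim from the log above.

```shell
#!/bin/sh
# Hedged diagnosis sketch, not part of the test suite. arch_to_platform is a
# hypothetical helper mapping `uname -m` output to a docker-style platform
# string; the kubectl call only runs when kubectl is present. The pod and
# context names are taken verbatim from the kubelet log above.

arch_to_platform() {
  case "$1" in
    aarch64|arm64) echo "linux/arm64" ;;
    x86_64|amd64)  echo "linux/amd64" ;;
    *)             echo "unknown" ;;
  esac
}

echo "host platform: $(arch_to_platform "$(uname -m)")"

if command -v kubectl >/dev/null 2>&1; then
  # Fetch the previous (crashed) container's output for the failing pod.
  kubectl --context functional-802000 logs \
    hello-node-connect-58d66798bb-lvlqs --previous
fi
```

If the crashed container's log is empty and it exits immediately, comparing the host platform printed above with the image's `Os/Architecture` is usually the fastest way to confirm an architecture mismatch.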

TestImageBuild/serial/BuildWithBuildArg (1.1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-122000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-122000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 99554ca6fcaf
	Removing intermediate container 99554ca6fcaf
	 ---> 1b87697067f0
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 1e0209f75b60
	Removing intermediate container 1e0209f75b60
	 ---> bff8c669e9b0
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 455e76b3d1f1
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
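The `exec format error` at Step 4/5 is consistent with the platform warnings earlier in the build: the base image `gcr.io/google-containers/alpine-with-bash:1.0` is linux/amd64, while the host is linux/arm64, so `RUN` steps cannot execute without emulation. A minimal sketch of how one might confirm and work around this, assuming the standard docker CLI (the binfmt installer image is a common community tool, not part of minikube):

```shell
#!/bin/sh
# Sketch: confirm, and optionally work around, an image/host platform
# mismatch. IMAGE comes from the failing Dockerfile above; the docker calls
# are guarded so the script is a no-op where docker is absent.

IMAGE="gcr.io/google-containers/alpine-with-bash:1.0"

if command -v docker >/dev/null 2>&1; then
  # Prints the platform baked into the image, e.g. linux/amd64. If it does
  # not match the host, RUN steps fail with "exec format error".
  docker image inspect "$IMAGE" --format '{{.Os}}/{{.Architecture}}'

  # One workaround: register QEMU user-mode emulation for amd64 binaries so
  # the amd64 image can still run on an arm64 host (slow, but functional).
  docker run --privileged --rm tonistiigi/binfmt --install amd64
else
  echo "docker not available; skipping inspection"
fi
```

Note that the earlier `docker build` warnings already reported the mismatch; the build only fails once a `RUN` step actually tries to execute an amd64 binary.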
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-122000 -n image-122000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-122000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-802000 ssh                                    | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:00 PDT |                     |
	|                | sudo crictl inspecti                                     |                   |         |         |                     |                     |
	|                | registry.k8s.io/pause:latest                             |                   |         |         |                     |                     |
	| cache          | functional-802000 cache reload                           | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:00 PDT | 06 Jul 23 11:00 PDT |
	| ssh            | functional-802000 ssh                                    | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:00 PDT | 06 Jul 23 11:00 PDT |
	|                | sudo crictl inspecti                                     |                   |         |         |                     |                     |
	|                | registry.k8s.io/pause:latest                             |                   |         |         |                     |                     |
	| cache          | delete                                                   | minikube          | jenkins | v1.30.1 | 06 Jul 23 11:00 PDT | 06 Jul 23 11:00 PDT |
	|                | registry.k8s.io/pause:3.1                                |                   |         |         |                     |                     |
	| cache          | delete                                                   | minikube          | jenkins | v1.30.1 | 06 Jul 23 11:00 PDT | 06 Jul 23 11:00 PDT |
	|                | registry.k8s.io/pause:latest                             |                   |         |         |                     |                     |
	| image          | functional-802000 image rm                               | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-802000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-802000 ssh sudo cat                           | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|                | /etc/test/nested/copy/2465/hosts                         |                   |         |         |                     |                     |
	| image          | functional-802000 image ls                               | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	| image          | functional-802000 image load                             | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| update-context | functional-802000                                        | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-802000                                        | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-802000                                        | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| image          | functional-802000 image ls                               | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	| image          | functional-802000 image save --daemon                    | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:02 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-802000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-802000                                        | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-802000                                        | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-802000                                        | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT |                     |
	|                | image ls --format json                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-802000                                        | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | image ls --format table                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-802000 ssh pgrep                              | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT |                     |
	|                | buildkitd                                                |                   |         |         |                     |                     |
	| image          | functional-802000 image build -t                         | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | localhost/my-image:functional-802000                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                   |         |         |                     |                     |
	| image          | functional-802000 image ls                               | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	| delete         | -p functional-802000                                     | functional-802000 | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	| start          | -p image-122000 --driver=qemu2                           | image-122000      | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                |                                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-122000      | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | ./testdata/image-build/test-normal                       |                   |         |         |                     |                     |
	|                | -p image-122000                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-122000      | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                 |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                       |                   |         |         |                     |                     |
	|                | image-122000                                             |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/06 11:02:03
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0706 11:02:03.619437    3303 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:02:03.619558    3303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:02:03.619559    3303 out.go:309] Setting ErrFile to fd 2...
	I0706 11:02:03.619561    3303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:02:03.619627    3303 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:02:03.620696    3303 out.go:303] Setting JSON to false
	I0706 11:02:03.636682    3303 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1895,"bootTime":1688664628,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:02:03.636741    3303 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:02:03.640882    3303 out.go:177] * [image-122000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:02:03.647834    3303 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:02:03.647869    3303 notify.go:220] Checking for updates...
	I0706 11:02:03.654854    3303 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:02:03.657885    3303 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:02:03.660916    3303 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:02:03.663904    3303 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:02:03.666904    3303 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:02:03.669871    3303 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:02:03.673771    3303 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:02:03.680845    3303 start.go:297] selected driver: qemu2
	I0706 11:02:03.680848    3303 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:02:03.680854    3303 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:02:03.680940    3303 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:02:03.683845    3303 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:02:03.689038    3303 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0706 11:02:03.689109    3303 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0706 11:02:03.689123    3303 cni.go:84] Creating CNI manager for ""
	I0706 11:02:03.689127    3303 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:02:03.689129    3303 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:02:03.689134    3303 start_flags.go:319] config:
	{Name:image-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:02:03.693181    3303 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:02:03.699869    3303 out.go:177] * Starting control plane node image-122000 in cluster image-122000
	I0706 11:02:03.703805    3303 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:02:03.703834    3303 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:02:03.703845    3303 cache.go:57] Caching tarball of preloaded images
	I0706 11:02:03.703908    3303 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:02:03.703911    3303 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:02:03.704103    3303 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/config.json ...
	I0706 11:02:03.704128    3303 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/config.json: {Name:mkd546e06577c03e5f1e56ccee70e447dc115bde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:02:03.704323    3303 start.go:365] acquiring machines lock for image-122000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:02:03.704349    3303 start.go:369] acquired machines lock for "image-122000" in 23.334µs
	I0706 11:02:03.704368    3303 start.go:93] Provisioning new machine with config: &{Name:image-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:02:03.704390    3303 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:02:03.711638    3303 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0706 11:02:03.732261    3303 start.go:159] libmachine.API.Create for "image-122000" (driver="qemu2")
	I0706 11:02:03.732283    3303 client.go:168] LocalClient.Create starting
	I0706 11:02:03.732340    3303 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:02:03.732358    3303 main.go:141] libmachine: Decoding PEM data...
	I0706 11:02:03.732365    3303 main.go:141] libmachine: Parsing certificate...
	I0706 11:02:03.732393    3303 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:02:03.732408    3303 main.go:141] libmachine: Decoding PEM data...
	I0706 11:02:03.732414    3303 main.go:141] libmachine: Parsing certificate...
	I0706 11:02:03.732726    3303 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:02:03.898379    3303 main.go:141] libmachine: Creating SSH key...
	I0706 11:02:04.046904    3303 main.go:141] libmachine: Creating Disk image...
	I0706 11:02:04.046910    3303 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:02:04.047074    3303 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/disk.qcow2
	I0706 11:02:04.056127    3303 main.go:141] libmachine: STDOUT: 
	I0706 11:02:04.056141    3303 main.go:141] libmachine: STDERR: 
	I0706 11:02:04.056189    3303 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/disk.qcow2 +20000M
	I0706 11:02:04.063403    3303 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:02:04.063433    3303 main.go:141] libmachine: STDERR: 
	I0706 11:02:04.063448    3303 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/disk.qcow2
	I0706 11:02:04.063451    3303 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:02:04.063490    3303 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:c8:ba:21:05:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/disk.qcow2
	I0706 11:02:04.097867    3303 main.go:141] libmachine: STDOUT: 
	I0706 11:02:04.097891    3303 main.go:141] libmachine: STDERR: 
	I0706 11:02:04.097894    3303 main.go:141] libmachine: Attempt 0
	I0706 11:02:04.097909    3303 main.go:141] libmachine: Searching for 4e:c8:ba:21:5:d in /var/db/dhcpd_leases ...
	I0706 11:02:04.098006    3303 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0706 11:02:04.098028    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ba:5:ab:6d:ca:57 ID:1,ba:5:ab:6d:ca:57 Lease:0x64a85231}
	I0706 11:02:04.098034    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:d8:2a:1f:c8:40 ID:1,7a:d8:2a:1f:c8:40 Lease:0x64a700a4}
	I0706 11:02:04.098039    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:2e:9c:e5:0:5b ID:1,b2:2e:9c:e5:0:5b Lease:0x64a851e3}
	I0706 11:02:06.100168    3303 main.go:141] libmachine: Attempt 1
	I0706 11:02:06.100274    3303 main.go:141] libmachine: Searching for 4e:c8:ba:21:5:d in /var/db/dhcpd_leases ...
	I0706 11:02:06.100509    3303 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0706 11:02:06.100550    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ba:5:ab:6d:ca:57 ID:1,ba:5:ab:6d:ca:57 Lease:0x64a85231}
	I0706 11:02:06.100576    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:d8:2a:1f:c8:40 ID:1,7a:d8:2a:1f:c8:40 Lease:0x64a700a4}
	I0706 11:02:06.100630    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:2e:9c:e5:0:5b ID:1,b2:2e:9c:e5:0:5b Lease:0x64a851e3}
	I0706 11:02:08.102773    3303 main.go:141] libmachine: Attempt 2
	I0706 11:02:08.102789    3303 main.go:141] libmachine: Searching for 4e:c8:ba:21:5:d in /var/db/dhcpd_leases ...
	I0706 11:02:08.102876    3303 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0706 11:02:08.102896    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ba:5:ab:6d:ca:57 ID:1,ba:5:ab:6d:ca:57 Lease:0x64a85231}
	I0706 11:02:08.102901    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:d8:2a:1f:c8:40 ID:1,7a:d8:2a:1f:c8:40 Lease:0x64a700a4}
	I0706 11:02:08.102906    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:2e:9c:e5:0:5b ID:1,b2:2e:9c:e5:0:5b Lease:0x64a851e3}
	I0706 11:02:10.104926    3303 main.go:141] libmachine: Attempt 3
	I0706 11:02:10.104931    3303 main.go:141] libmachine: Searching for 4e:c8:ba:21:5:d in /var/db/dhcpd_leases ...
	I0706 11:02:10.104959    3303 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0706 11:02:10.104970    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ba:5:ab:6d:ca:57 ID:1,ba:5:ab:6d:ca:57 Lease:0x64a85231}
	I0706 11:02:10.104975    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:d8:2a:1f:c8:40 ID:1,7a:d8:2a:1f:c8:40 Lease:0x64a700a4}
	I0706 11:02:10.104979    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:2e:9c:e5:0:5b ID:1,b2:2e:9c:e5:0:5b Lease:0x64a851e3}
	I0706 11:02:12.107019    3303 main.go:141] libmachine: Attempt 4
	I0706 11:02:12.107033    3303 main.go:141] libmachine: Searching for 4e:c8:ba:21:5:d in /var/db/dhcpd_leases ...
	I0706 11:02:12.107115    3303 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0706 11:02:12.107124    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ba:5:ab:6d:ca:57 ID:1,ba:5:ab:6d:ca:57 Lease:0x64a85231}
	I0706 11:02:12.107129    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:d8:2a:1f:c8:40 ID:1,7a:d8:2a:1f:c8:40 Lease:0x64a700a4}
	I0706 11:02:12.107133    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:2e:9c:e5:0:5b ID:1,b2:2e:9c:e5:0:5b Lease:0x64a851e3}
	I0706 11:02:14.109214    3303 main.go:141] libmachine: Attempt 5
	I0706 11:02:14.109227    3303 main.go:141] libmachine: Searching for 4e:c8:ba:21:5:d in /var/db/dhcpd_leases ...
	I0706 11:02:14.109323    3303 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0706 11:02:14.109331    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ba:5:ab:6d:ca:57 ID:1,ba:5:ab:6d:ca:57 Lease:0x64a85231}
	I0706 11:02:14.109335    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:d8:2a:1f:c8:40 ID:1,7a:d8:2a:1f:c8:40 Lease:0x64a700a4}
	I0706 11:02:14.109339    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:2e:9c:e5:0:5b ID:1,b2:2e:9c:e5:0:5b Lease:0x64a851e3}
	I0706 11:02:16.111427    3303 main.go:141] libmachine: Attempt 6
	I0706 11:02:16.111453    3303 main.go:141] libmachine: Searching for 4e:c8:ba:21:5:d in /var/db/dhcpd_leases ...
	I0706 11:02:16.111645    3303 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0706 11:02:16.111666    3303 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4e:c8:ba:21:5:d ID:1,4e:c8:ba:21:5:d Lease:0x64a85327}
	I0706 11:02:16.111673    3303 main.go:141] libmachine: Found match: 4e:c8:ba:21:5:d
	I0706 11:02:16.111690    3303 main.go:141] libmachine: IP: 192.168.105.5
	I0706 11:02:16.111699    3303 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0706 11:02:18.132683    3303 machine.go:88] provisioning docker machine ...
	I0706 11:02:18.132742    3303 buildroot.go:166] provisioning hostname "image-122000"
	I0706 11:02:18.133002    3303 main.go:141] libmachine: Using SSH client type: native
	I0706 11:02:18.134011    3303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c091e0] 0x102c0bc40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0706 11:02:18.134027    3303 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-122000 && echo "image-122000" | sudo tee /etc/hostname
	I0706 11:02:18.229619    3303 main.go:141] libmachine: SSH cmd err, output: <nil>: image-122000
	
	I0706 11:02:18.229743    3303 main.go:141] libmachine: Using SSH client type: native
	I0706 11:02:18.230257    3303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c091e0] 0x102c0bc40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0706 11:02:18.230271    3303 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-122000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-122000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-122000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0706 11:02:18.306300    3303 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0706 11:02:18.306319    3303 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15452-1247/.minikube CaCertPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15452-1247/.minikube}
	I0706 11:02:18.306330    3303 buildroot.go:174] setting up certificates
	I0706 11:02:18.306337    3303 provision.go:83] configureAuth start
	I0706 11:02:18.306341    3303 provision.go:138] copyHostCerts
	I0706 11:02:18.306492    3303 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.pem, removing ...
	I0706 11:02:18.306499    3303 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.pem
	I0706 11:02:18.306685    3303 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.pem (1078 bytes)
	I0706 11:02:18.306973    3303 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1247/.minikube/cert.pem, removing ...
	I0706 11:02:18.306976    3303 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1247/.minikube/cert.pem
	I0706 11:02:18.307042    3303 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15452-1247/.minikube/cert.pem (1123 bytes)
	I0706 11:02:18.307196    3303 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1247/.minikube/key.pem, removing ...
	I0706 11:02:18.307199    3303 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1247/.minikube/key.pem
	I0706 11:02:18.307256    3303 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15452-1247/.minikube/key.pem (1675 bytes)
	I0706 11:02:18.307390    3303 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca-key.pem org=jenkins.image-122000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-122000]
	I0706 11:02:18.558468    3303 provision.go:172] copyRemoteCerts
	I0706 11:02:18.558513    3303 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0706 11:02:18.558522    3303 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/id_rsa Username:docker}
	I0706 11:02:18.591697    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0706 11:02:18.598629    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0706 11:02:18.605428    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0706 11:02:18.612724    3303 provision.go:86] duration metric: configureAuth took 306.376625ms
	I0706 11:02:18.612730    3303 buildroot.go:189] setting minikube options for container-runtime
	I0706 11:02:18.612851    3303 config.go:182] Loaded profile config "image-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:02:18.612887    3303 main.go:141] libmachine: Using SSH client type: native
	I0706 11:02:18.613108    3303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c091e0] 0x102c0bc40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0706 11:02:18.613111    3303 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0706 11:02:18.678457    3303 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0706 11:02:18.678462    3303 buildroot.go:70] root file system type: tmpfs
	I0706 11:02:18.678516    3303 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0706 11:02:18.678573    3303 main.go:141] libmachine: Using SSH client type: native
	I0706 11:02:18.678824    3303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c091e0] 0x102c0bc40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0706 11:02:18.678862    3303 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0706 11:02:18.746470    3303 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
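The unit file above relies on systemd's ExecStart-clearing rule: for a non-`oneshot` service, an empty `ExecStart=` discards any command inherited from the base unit before the real one is set, which is exactly what the comments in the file describe. A minimal check of that pattern (file path and contents are illustrative, not the real unit):

```shell
# Write a fragment using the clear-then-set ExecStart pattern from the log.
cat > /tmp/docker-execstart.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF

# systemd only accepts multiple ExecStart= lines for Type=oneshot; otherwise
# every ExecStart= except the final one must be empty. Verify the fragment.
total=$(grep -c '^ExecStart=' /tmp/docker-execstart.conf)
empty=$(grep -c '^ExecStart=$' /tmp/docker-execstart.conf)
echo "total=$total empty=$empty"
```

If the empty line were missing, systemd would refuse to start the service with the "more than one ExecStart= setting" error quoted in the unit's comments.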
	
	I0706 11:02:18.746527    3303 main.go:141] libmachine: Using SSH client type: native
	I0706 11:02:18.746787    3303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c091e0] 0x102c0bc40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0706 11:02:18.746795    3303 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0706 11:02:19.102547    3303 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0706 11:02:19.102555    3303 machine.go:91] provisioned docker machine in 969.85925ms
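The provisioning command above uses a replace-only-if-changed idiom: `diff -u` succeeds when the deployed unit already matches the new one (so nothing happens), and only on a difference — or, as in this log, when the old file does not exist at all — does the `||` branch install the new file and restart the service. A sketch with plain files (paths are illustrative, and the daemon-reload/restart is replaced by a marker file):

```shell
install_if_changed() {
  # Mimic: sudo diff -u old new || { sudo mv new old; daemon-reload; restart; }
  diff -u "$1" "$2" >/dev/null 2>&1 \
    || { mv "$2" "$1"; echo restarted > /tmp/restart.marker; }
}

printf 'old config\n' > /tmp/svc.conf
printf 'new config\n' > /tmp/svc.conf.new
rm -f /tmp/restart.marker
install_if_changed /tmp/svc.conf /tmp/svc.conf.new
```

Because `diff` also fails when the first path is missing ("can't stat ... No such file or directory" in the log output), a first-time install takes the same branch as a changed file.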
	I0706 11:02:19.102566    3303 client.go:171] LocalClient.Create took 15.370323417s
	I0706 11:02:19.102573    3303 start.go:167] duration metric: libmachine.API.Create for "image-122000" took 15.37036475s
	I0706 11:02:19.102576    3303 start.go:300] post-start starting for "image-122000" (driver="qemu2")
	I0706 11:02:19.102580    3303 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0706 11:02:19.102649    3303 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0706 11:02:19.102656    3303 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/id_rsa Username:docker}
	I0706 11:02:19.135886    3303 ssh_runner.go:195] Run: cat /etc/os-release
	I0706 11:02:19.137430    3303 info.go:137] Remote host: Buildroot 2021.02.12
	I0706 11:02:19.137434    3303 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1247/.minikube/addons for local assets ...
	I0706 11:02:19.137502    3303 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1247/.minikube/files for local assets ...
	I0706 11:02:19.137607    3303 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15452-1247/.minikube/files/etc/ssl/certs/24652.pem -> 24652.pem in /etc/ssl/certs
	I0706 11:02:19.137713    3303 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0706 11:02:19.140746    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/files/etc/ssl/certs/24652.pem --> /etc/ssl/certs/24652.pem (1708 bytes)
	I0706 11:02:19.148304    3303 start.go:303] post-start completed in 45.724084ms
	I0706 11:02:19.148683    3303 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/config.json ...
	I0706 11:02:19.148843    3303 start.go:128] duration metric: createHost completed in 15.444500291s
	I0706 11:02:19.148872    3303 main.go:141] libmachine: Using SSH client type: native
	I0706 11:02:19.149089    3303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c091e0] 0x102c0bc40 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0706 11:02:19.149092    3303 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0706 11:02:19.210275    3303 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688666539.682765919
	
	I0706 11:02:19.210278    3303 fix.go:206] guest clock: 1688666539.682765919
	I0706 11:02:19.210282    3303 fix.go:219] Guest: 2023-07-06 11:02:19.682765919 -0700 PDT Remote: 2023-07-06 11:02:19.148844 -0700 PDT m=+15.549585292 (delta=533.921919ms)
	I0706 11:02:19.210290    3303 fix.go:190] guest clock delta is within tolerance: 533.921919ms
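The guest-clock fix above compares the guest's timestamp against the host's and only resyncs when the absolute delta exceeds a tolerance; here the 533.9ms delta was accepted. A sketch of that comparison in integer milliseconds, using the timestamps from this log (the 2s tolerance is an assumption for illustration; minikube's actual threshold is not shown here):

```shell
guest_ms=1688666539682   # guest clock at the probe, epoch milliseconds
host_ms=1688666539148    # host clock at the same instant
tolerance_ms=2000        # assumed tolerance

# Absolute difference between the two clocks.
delta_ms=$(( guest_ms - host_ms ))
[ "$delta_ms" -lt 0 ] && delta_ms=$(( -delta_ms ))

if [ "$delta_ms" -le "$tolerance_ms" ]; then
  verdict="within tolerance"
else
  verdict="resync required"
fi
echo "delta=${delta_ms}ms: $verdict"
```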
	I0706 11:02:19.210292    3303 start.go:83] releasing machines lock for "image-122000", held for 15.505990584s
	I0706 11:02:19.210591    3303 ssh_runner.go:195] Run: cat /version.json
	I0706 11:02:19.210596    3303 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0706 11:02:19.210601    3303 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/id_rsa Username:docker}
	I0706 11:02:19.210612    3303 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/id_rsa Username:docker}
	I0706 11:02:19.245404    3303 ssh_runner.go:195] Run: systemctl --version
	I0706 11:02:19.287298    3303 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0706 11:02:19.289051    3303 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0706 11:02:19.289080    3303 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0706 11:02:19.294350    3303 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0706 11:02:19.294354    3303 start.go:466] detecting cgroup driver to use...
	I0706 11:02:19.294410    3303 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 11:02:19.300176    3303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0706 11:02:19.303600    3303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0706 11:02:19.306554    3303 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0706 11:02:19.306591    3303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0706 11:02:19.309598    3303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 11:02:19.312945    3303 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0706 11:02:19.316428    3303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 11:02:19.320185    3303 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0706 11:02:19.323521    3303 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0706 11:02:19.326407    3303 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0706 11:02:19.329187    3303 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0706 11:02:19.332537    3303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 11:02:19.408186    3303 ssh_runner.go:195] Run: sudo systemctl restart containerd
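The run of `sed` commands above switches containerd to the cgroupfs driver in `/etc/containerd/config.toml`. The key substitution — forcing `SystemdCgroup = false` while preserving indentation via a captured group — can be exercised against a sample file (the contents below are a minimal stand-in, not the real minikube config):

```shell
cat > /tmp/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution the log runs (GNU sed): \1 re-emits the leading spaces.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml
result=$(grep -o 'SystemdCgroup = false' /tmp/config.toml)
echo "$result"
```

After editing, the log's `systemctl daemon-reload && systemctl restart containerd` makes the change take effect.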
	I0706 11:02:19.415057    3303 start.go:466] detecting cgroup driver to use...
	I0706 11:02:19.415131    3303 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0706 11:02:19.421931    3303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 11:02:19.426940    3303 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0706 11:02:19.435594    3303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 11:02:19.440578    3303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 11:02:19.445650    3303 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0706 11:02:19.488827    3303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 11:02:19.494390    3303 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 11:02:19.499821    3303 ssh_runner.go:195] Run: which cri-dockerd
	I0706 11:02:19.501128    3303 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0706 11:02:19.504095    3303 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0706 11:02:19.509051    3303 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0706 11:02:19.602509    3303 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0706 11:02:19.685828    3303 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0706 11:02:19.685837    3303 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0706 11:02:19.691072    3303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 11:02:19.773744    3303 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 11:02:20.930319    3303 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.156566375s)
	I0706 11:02:20.930398    3303 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 11:02:21.002270    3303 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0706 11:02:21.074557    3303 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 11:02:21.153259    3303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 11:02:21.233582    3303 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0706 11:02:21.241412    3303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 11:02:21.324642    3303 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0706 11:02:21.347775    3303 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0706 11:02:21.347873    3303 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0706 11:02:21.350005    3303 start.go:534] Will wait 60s for crictl version
	I0706 11:02:21.350056    3303 ssh_runner.go:195] Run: which crictl
	I0706 11:02:21.351542    3303 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0706 11:02:21.368698    3303 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0706 11:02:21.368773    3303 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 11:02:21.378490    3303 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 11:02:21.394792    3303 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0706 11:02:21.394940    3303 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0706 11:02:21.396331    3303 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
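The `/etc/hosts` update above is idempotent: it filters out any existing `host.minikube.internal` line (matched by a leading tab plus the name anchored at end of line), appends a fresh entry, and copies the result back. Running it twice leaves exactly one entry. A sketch against a scratch file:

```shell
HOSTS=/tmp/hosts.test
printf '127.0.0.1\tlocalhost\n' > "$HOSTS"

add_host_entry() {
  # Drop any stale entry for the name, then append the current one.
  { grep -v $'\thost.minikube.internal$' "$HOSTS"; \
    printf '192.168.105.1\thost.minikube.internal\n'; } > "$HOSTS.tmp"
  cp "$HOSTS.tmp" "$HOSTS"
}

add_host_entry
add_host_entry   # second run must not duplicate the entry
count=$(grep -c 'host.minikube.internal' "$HOSTS")
echo "entries=$count"
```

Writing to a temp file and `cp`-ing back (rather than redirecting in place) is what keeps a concurrent reader from ever seeing a truncated `/etc/hosts`.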
	I0706 11:02:21.400268    3303 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:02:21.400307    3303 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0706 11:02:21.405382    3303 docker.go:636] Got preloaded images: 
	I0706 11:02:21.405386    3303 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0706 11:02:21.405419    3303 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0706 11:02:21.408812    3303 ssh_runner.go:195] Run: which lz4
	I0706 11:02:21.410087    3303 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0706 11:02:21.411312    3303 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0706 11:02:21.411326    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0706 11:02:22.695781    3303 docker.go:600] Took 1.285742 seconds to copy over tarball
	I0706 11:02:22.695833    3303 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0706 11:02:23.731983    3303 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.036141458s)
	I0706 11:02:23.731998    3303 ssh_runner.go:146] rm: /preloaded.tar.lz4
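The preload path above is a copy/extract/remove sequence: scp the tarball to the guest, unpack it into `/var` with `tar -I lz4`, then delete the tarball to reclaim disk. A sketch of the same sequence using gzip, since the `lz4` binary may not be available everywhere (directory names and file contents below are stand-ins, not the real preload layout):

```shell
# Build a stand-in "preload" tarball (the real one carries docker image layers).
mkdir -p /tmp/preload-src/lib/docker
echo layerdata > /tmp/preload-src/lib/docker/layer1
tar -C /tmp/preload-src -czf /tmp/preloaded.tar.gz .

# Extract into the target root, then remove the tarball -- mirroring the
# log's `tar -I lz4 -C /var -xf /preloaded.tar.lz4` followed by rm.
mkdir -p /tmp/preload-dst
tar -C /tmp/preload-dst -xzf /tmp/preloaded.tar.gz
rm /tmp/preloaded.tar.gz
```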
	I0706 11:02:23.747903    3303 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0706 11:02:23.751252    3303 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0706 11:02:23.756236    3303 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 11:02:23.840062    3303 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 11:02:25.288934    3303 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.448864292s)
	I0706 11:02:25.289016    3303 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0706 11:02:25.294686    3303 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0706 11:02:25.294691    3303 cache_images.go:84] Images are preloaded, skipping loading
	I0706 11:02:25.294738    3303 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0706 11:02:25.302065    3303 cni.go:84] Creating CNI manager for ""
	I0706 11:02:25.302070    3303 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:02:25.302074    3303 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0706 11:02:25.302082    3303 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-122000 NodeName:image-122000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0706 11:02:25.302151    3303 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-122000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
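One consistency property worth checking in a generated config like the one above: the kubeadm `podSubnet` and the kube-proxy `clusterCIDR` must agree, as they do here (both `10.244.0.0/16`). A quick extraction over a trimmed-down copy of the config (the heredoc below keeps only the two relevant fields):

```shell
cat > /tmp/kubeadm.yaml <<'EOF'
networking:
  podSubnet: "10.244.0.0/16"
---
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
EOF

# Pull the quoted values out of each document.
pod_subnet=$(sed -n 's/.*podSubnet: "\(.*\)"/\1/p' /tmp/kubeadm.yaml)
cluster_cidr=$(sed -n 's/.*clusterCIDR: "\(.*\)"/\1/p' /tmp/kubeadm.yaml)
echo "podSubnet=$pod_subnet clusterCIDR=$cluster_cidr"
```

A mismatch between the two makes kube-proxy misclassify pod traffic as external, which is why the generator derives both from the same pod CIDR.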
	
	I0706 11:02:25.302191    3303 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-122000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:image-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0706 11:02:25.302247    3303 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0706 11:02:25.305248    3303 binaries.go:44] Found k8s binaries, skipping transfer
	I0706 11:02:25.305269    3303 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0706 11:02:25.308336    3303 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0706 11:02:25.313346    3303 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0706 11:02:25.318290    3303 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0706 11:02:25.322985    3303 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0706 11:02:25.324233    3303 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0706 11:02:25.328080    3303 certs.go:56] Setting up /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000 for IP: 192.168.105.5
	I0706 11:02:25.328088    3303 certs.go:190] acquiring lock for shared ca certs: {Name:mk763e62c6a9326245ca88f64c15681d0696aa38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:02:25.328220    3303 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.key
	I0706 11:02:25.328257    3303 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.key
	I0706 11:02:25.328280    3303 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/client.key
	I0706 11:02:25.328285    3303 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/client.crt with IP's: []
	I0706 11:02:25.382841    3303 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/client.crt ...
	I0706 11:02:25.382844    3303 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/client.crt: {Name:mkfe62e683fb92c1bc09e39ea2b52c3690834850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:02:25.383047    3303 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/client.key ...
	I0706 11:02:25.383049    3303 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/client.key: {Name:mkfc5848ea474facc691afcb18e41751e35d7cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:02:25.383159    3303 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/apiserver.key.e69b33ca
	I0706 11:02:25.383164    3303 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0706 11:02:25.414902    3303 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/apiserver.crt.e69b33ca ...
	I0706 11:02:25.414904    3303 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/apiserver.crt.e69b33ca: {Name:mkb27b9bcd343caccabc042962469db41dd6814b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:02:25.415046    3303 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/apiserver.key.e69b33ca ...
	I0706 11:02:25.415048    3303 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/apiserver.key.e69b33ca: {Name:mk39e6103093f57025c53bf1a4be26556b9fa45e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:02:25.415149    3303 certs.go:337] copying /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/apiserver.crt
	I0706 11:02:25.415242    3303 certs.go:341] copying /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/apiserver.key
	I0706 11:02:25.415314    3303 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/proxy-client.key
	I0706 11:02:25.415319    3303 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/proxy-client.crt with IP's: []
	I0706 11:02:25.541321    3303 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/proxy-client.crt ...
	I0706 11:02:25.541324    3303 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/proxy-client.crt: {Name:mk645c5875c6f32b89630af3de512f66d407bc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:02:25.541487    3303 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/proxy-client.key ...
	I0706 11:02:25.541488    3303 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/proxy-client.key: {Name:mk0ce932af80a9f2abaae7a5533a5bbb7bddf512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:02:25.541722    3303 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/2465.pem (1338 bytes)
	W0706 11:02:25.541746    3303 certs.go:433] ignoring /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/2465_empty.pem, impossibly tiny 0 bytes
	I0706 11:02:25.541754    3303 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca-key.pem (1675 bytes)
	I0706 11:02:25.541778    3303 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem (1078 bytes)
	I0706 11:02:25.541800    3303 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem (1123 bytes)
	I0706 11:02:25.541818    3303 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/key.pem (1675 bytes)
	I0706 11:02:25.541862    3303 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/files/etc/ssl/certs/24652.pem (1708 bytes)
	I0706 11:02:25.542143    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0706 11:02:25.549170    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0706 11:02:25.556461    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0706 11:02:25.563536    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/image-122000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0706 11:02:25.570134    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0706 11:02:25.576630    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0706 11:02:25.583800    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0706 11:02:25.590641    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0706 11:02:25.597150    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0706 11:02:25.604477    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/2465.pem --> /usr/share/ca-certificates/2465.pem (1338 bytes)
	I0706 11:02:25.611570    3303 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/files/etc/ssl/certs/24652.pem --> /usr/share/ca-certificates/24652.pem (1708 bytes)
	I0706 11:02:25.617995    3303 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0706 11:02:25.622817    3303 ssh_runner.go:195] Run: openssl version
	I0706 11:02:25.624679    3303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2465.pem && ln -fs /usr/share/ca-certificates/2465.pem /etc/ssl/certs/2465.pem"
	I0706 11:02:25.628023    3303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2465.pem
	I0706 11:02:25.629566    3303 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul  6 17:57 /usr/share/ca-certificates/2465.pem
	I0706 11:02:25.629586    3303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2465.pem
	I0706 11:02:25.631583    3303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2465.pem /etc/ssl/certs/51391683.0"
	I0706 11:02:25.634481    3303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24652.pem && ln -fs /usr/share/ca-certificates/24652.pem /etc/ssl/certs/24652.pem"
	I0706 11:02:25.637495    3303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24652.pem
	I0706 11:02:25.638974    3303 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul  6 17:57 /usr/share/ca-certificates/24652.pem
	I0706 11:02:25.638997    3303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24652.pem
	I0706 11:02:25.640677    3303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24652.pem /etc/ssl/certs/3ec20f2e.0"
	I0706 11:02:25.643943    3303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0706 11:02:25.646813    3303 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0706 11:02:25.648328    3303 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  6 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0706 11:02:25.648344    3303 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0706 11:02:25.650298    3303 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
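The symlinks created above (`51391683.0`, `3ec20f2e.0`, `b5213941.0`) follow OpenSSL's subject-hash naming: `openssl x509 -hash -noout` prints the hash, and the cert is linked as `<hash>.0` so the library can look CAs up by hash. The `test -L ... || ln -fs ...` guard makes the step idempotent. A sketch using a precomputed hash (`b5213941` is minikubeCA's hash as seen in this log; no openssl invocation needed for the sketch):

```shell
CERTS=/tmp/ssl-certs
mkdir -p "$CERTS"
echo "fake PEM" > "$CERTS/minikubeCA.pem"
HASH=b5213941   # would come from: openssl x509 -hash -noout -in minikubeCA.pem

# Create the hash-named symlink only if it is not already in place;
# running it again is a no-op.
test -L "$CERTS/$HASH.0" || ln -fs "$CERTS/minikubeCA.pem" "$CERTS/$HASH.0"
test -L "$CERTS/$HASH.0" || ln -fs "$CERTS/minikubeCA.pem" "$CERTS/$HASH.0"
```

The `.0` suffix is a collision index: if two distinct CAs hashed to the same value, the second would be linked as `<hash>.1`.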
	I0706 11:02:25.653558    3303 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0706 11:02:25.654868    3303 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0706 11:02:25.654895    3303 kubeadm.go:404] StartCluster: {Name:image-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:02:25.654969    3303 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0706 11:02:25.666965    3303 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0706 11:02:25.669866    3303 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0706 11:02:25.672810    3303 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0706 11:02:25.675604    3303 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0706 11:02:25.675614    3303 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0706 11:02:25.698824    3303 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0706 11:02:25.698846    3303 kubeadm.go:322] [preflight] Running pre-flight checks
	I0706 11:02:25.760316    3303 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0706 11:02:25.760380    3303 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0706 11:02:25.760444    3303 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0706 11:02:25.816579    3303 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0706 11:02:25.825752    3303 out.go:204]   - Generating certificates and keys ...
	I0706 11:02:25.825790    3303 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0706 11:02:25.825820    3303 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0706 11:02:25.973050    3303 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0706 11:02:26.017760    3303 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0706 11:02:26.071025    3303 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0706 11:02:26.129692    3303 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0706 11:02:26.446667    3303 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0706 11:02:26.446731    3303 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-122000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0706 11:02:26.577347    3303 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0706 11:02:26.577409    3303 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-122000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0706 11:02:26.634101    3303 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0706 11:02:26.747581    3303 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0706 11:02:26.788365    3303 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0706 11:02:26.788394    3303 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0706 11:02:26.903758    3303 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0706 11:02:26.951660    3303 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0706 11:02:27.002563    3303 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0706 11:02:27.077967    3303 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0706 11:02:27.084591    3303 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0706 11:02:27.084893    3303 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0706 11:02:27.084974    3303 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0706 11:02:27.178462    3303 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0706 11:02:27.181627    3303 out.go:204]   - Booting up control plane ...
	I0706 11:02:27.181672    3303 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0706 11:02:27.181704    3303 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0706 11:02:27.181733    3303 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0706 11:02:27.181786    3303 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0706 11:02:27.181897    3303 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0706 11:02:30.686426    3303 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.506168 seconds
	I0706 11:02:30.686485    3303 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0706 11:02:30.691680    3303 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0706 11:02:31.209758    3303 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0706 11:02:31.210014    3303 kubeadm.go:322] [mark-control-plane] Marking the node image-122000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0706 11:02:31.716004    3303 kubeadm.go:322] [bootstrap-token] Using token: hc5jea.9sliop3r5duyf8kl
	I0706 11:02:31.719118    3303 out.go:204]   - Configuring RBAC rules ...
	I0706 11:02:31.719185    3303 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0706 11:02:31.720541    3303 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0706 11:02:31.728026    3303 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0706 11:02:31.729229    3303 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0706 11:02:31.730670    3303 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0706 11:02:31.731779    3303 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0706 11:02:31.735798    3303 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0706 11:02:31.931535    3303 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0706 11:02:32.122286    3303 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0706 11:02:32.122755    3303 kubeadm.go:322] 
	I0706 11:02:32.122790    3303 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0706 11:02:32.122793    3303 kubeadm.go:322] 
	I0706 11:02:32.122844    3303 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0706 11:02:32.122847    3303 kubeadm.go:322] 
	I0706 11:02:32.122863    3303 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0706 11:02:32.122896    3303 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0706 11:02:32.122920    3303 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0706 11:02:32.122921    3303 kubeadm.go:322] 
	I0706 11:02:32.122948    3303 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0706 11:02:32.122950    3303 kubeadm.go:322] 
	I0706 11:02:32.122974    3303 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0706 11:02:32.122977    3303 kubeadm.go:322] 
	I0706 11:02:32.123004    3303 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0706 11:02:32.123052    3303 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0706 11:02:32.123093    3303 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0706 11:02:32.123095    3303 kubeadm.go:322] 
	I0706 11:02:32.123141    3303 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0706 11:02:32.123176    3303 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0706 11:02:32.123178    3303 kubeadm.go:322] 
	I0706 11:02:32.123219    3303 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token hc5jea.9sliop3r5duyf8kl \
	I0706 11:02:32.123270    3303 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:54887cb817b031a56c6be5acb24737812f5477ec9674aeae1af9b05ae3868136 \
	I0706 11:02:32.123279    3303 kubeadm.go:322] 	--control-plane 
	I0706 11:02:32.123281    3303 kubeadm.go:322] 
	I0706 11:02:32.123320    3303 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0706 11:02:32.123322    3303 kubeadm.go:322] 
	I0706 11:02:32.123368    3303 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hc5jea.9sliop3r5duyf8kl \
	I0706 11:02:32.123435    3303 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:54887cb817b031a56c6be5acb24737812f5477ec9674aeae1af9b05ae3868136 
	I0706 11:02:32.123572    3303 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0706 11:02:32.123585    3303 cni.go:84] Creating CNI manager for ""
	I0706 11:02:32.123592    3303 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:02:32.131156    3303 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0706 11:02:32.136226    3303 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0706 11:02:32.139283    3303 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0706 11:02:32.143987    3303 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0706 11:02:32.144040    3303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:02:32.144073    3303 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=b6e1a3abc91e215b081da44b95c5d4a34c954e9b minikube.k8s.io/name=image-122000 minikube.k8s.io/updated_at=2023_07_06T11_02_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:02:32.211598    3303 kubeadm.go:1081] duration metric: took 67.592583ms to wait for elevateKubeSystemPrivileges.
	I0706 11:02:32.211629    3303 ops.go:34] apiserver oom_adj: -16
	I0706 11:02:32.211634    3303 kubeadm.go:406] StartCluster complete in 6.556761333s
	I0706 11:02:32.211644    3303 settings.go:142] acquiring lock: {Name:mk352fa14b583fbace5fdd55e6f9ba4f39f48007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:02:32.211742    3303 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:02:32.212076    3303 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/kubeconfig: {Name:mk34623cbdb1646c9229359a97354a4ad80828c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:02:32.212290    3303 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0706 11:02:32.212301    3303 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0706 11:02:32.212346    3303 addons.go:66] Setting storage-provisioner=true in profile "image-122000"
	I0706 11:02:32.212353    3303 addons.go:228] Setting addon storage-provisioner=true in "image-122000"
	I0706 11:02:32.212361    3303 addons.go:66] Setting default-storageclass=true in profile "image-122000"
	I0706 11:02:32.212365    3303 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-122000"
	I0706 11:02:32.212374    3303 host.go:66] Checking if "image-122000" exists ...
	I0706 11:02:32.212389    3303 config.go:182] Loaded profile config "image-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:02:32.218273    3303 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0706 11:02:32.221251    3303 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0706 11:02:32.221255    3303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0706 11:02:32.221262    3303 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/id_rsa Username:docker}
	I0706 11:02:32.226271    3303 addons.go:228] Setting addon default-storageclass=true in "image-122000"
	I0706 11:02:32.226284    3303 host.go:66] Checking if "image-122000" exists ...
	I0706 11:02:32.226976    3303 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0706 11:02:32.226980    3303 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0706 11:02:32.226985    3303 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/image-122000/id_rsa Username:docker}
	I0706 11:02:32.254045    3303 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0706 11:02:32.270430    3303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0706 11:02:32.277037    3303 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0706 11:02:32.661546    3303 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0706 11:02:32.730233    3303 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-122000" context rescaled to 1 replicas
	I0706 11:02:32.730247    3303 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:02:32.736509    3303 out.go:177] * Verifying Kubernetes components...
	I0706 11:02:32.747412    3303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 11:02:32.761372    3303 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0706 11:02:32.758123    3303 api_server.go:52] waiting for apiserver process to appear ...
	I0706 11:02:32.769512    3303 addons.go:499] enable addons completed in 557.209917ms: enabled=[storage-provisioner default-storageclass]
	I0706 11:02:32.769557    3303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 11:02:32.773700    3303 api_server.go:72] duration metric: took 43.443916ms to wait for apiserver process to appear ...
	I0706 11:02:32.773704    3303 api_server.go:88] waiting for apiserver healthz status ...
	I0706 11:02:32.773713    3303 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0706 11:02:32.777187    3303 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0706 11:02:32.777842    3303 api_server.go:141] control plane version: v1.27.3
	I0706 11:02:32.777845    3303 api_server.go:131] duration metric: took 4.14ms to wait for apiserver health ...
	I0706 11:02:32.777848    3303 system_pods.go:43] waiting for kube-system pods to appear ...
	I0706 11:02:32.780357    3303 system_pods.go:59] 5 kube-system pods found
	I0706 11:02:32.780362    3303 system_pods.go:61] "etcd-image-122000" [82386e22-b613-4b87-b91a-21da708c4f70] Pending
	I0706 11:02:32.780364    3303 system_pods.go:61] "kube-apiserver-image-122000" [30a9378d-cef4-49a8-a5da-a22a8b7f2ca1] Pending
	I0706 11:02:32.780366    3303 system_pods.go:61] "kube-controller-manager-image-122000" [0d250c98-1bca-4389-916e-c0809c5696be] Pending
	I0706 11:02:32.780367    3303 system_pods.go:61] "kube-scheduler-image-122000" [1563ff24-842f-4aa3-a167-e77b81c8d29f] Pending
	I0706 11:02:32.780369    3303 system_pods.go:61] "storage-provisioner" [b1301b56-c9bd-4aa0-880a-07d669783841] Pending
	I0706 11:02:32.780370    3303 system_pods.go:74] duration metric: took 2.520667ms to wait for pod list to return data ...
	I0706 11:02:32.780373    3303 kubeadm.go:581] duration metric: took 50.117625ms to wait for : map[apiserver:true system_pods:true] ...
	I0706 11:02:32.780378    3303 node_conditions.go:102] verifying NodePressure condition ...
	I0706 11:02:32.781639    3303 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0706 11:02:32.781646    3303 node_conditions.go:123] node cpu capacity is 2
	I0706 11:02:32.781650    3303 node_conditions.go:105] duration metric: took 1.270958ms to run NodePressure ...
	I0706 11:02:32.781654    3303 start.go:228] waiting for startup goroutines ...
	I0706 11:02:32.781656    3303 start.go:233] waiting for cluster config update ...
	I0706 11:02:32.781660    3303 start.go:242] writing updated cluster config ...
	I0706 11:02:32.781939    3303 ssh_runner.go:195] Run: rm -f paused
	I0706 11:02:32.808452    3303 start.go:642] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0706 11:02:32.812504    3303 out.go:177] * Done! kubectl is now configured to use "image-122000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-07-06 18:02:15 UTC, ends at Thu 2023-07-06 18:02:35 UTC. --
	Jul 06 18:02:28 image-122000 cri-dockerd[1055]: time="2023-07-06T18:02:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c5bc147d519236ae712ce579c0a7da17ef2ab22a950159df3ec0f478f456b47/resolv.conf as [nameserver 192.168.105.1]"
	Jul 06 18:02:28 image-122000 dockerd[1163]: time="2023-07-06T18:02:28.703085631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 18:02:28 image-122000 dockerd[1163]: time="2023-07-06T18:02:28.703141256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:02:28 image-122000 dockerd[1163]: time="2023-07-06T18:02:28.703155881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 18:02:28 image-122000 dockerd[1163]: time="2023-07-06T18:02:28.703166590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:02:28 image-122000 dockerd[1163]: time="2023-07-06T18:02:28.723351631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 18:02:28 image-122000 dockerd[1163]: time="2023-07-06T18:02:28.723418631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:02:28 image-122000 dockerd[1163]: time="2023-07-06T18:02:28.723431548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 18:02:28 image-122000 dockerd[1163]: time="2023-07-06T18:02:28.723453673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:02:28 image-122000 cri-dockerd[1055]: time="2023-07-06T18:02:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/afc45411829a9465037b644b43a3da2dd2753f3279a3d65fccf0f6b5dec2f980/resolv.conf as [nameserver 192.168.105.1]"
	Jul 06 18:02:28 image-122000 dockerd[1163]: time="2023-07-06T18:02:28.780322715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 18:02:28 image-122000 dockerd[1163]: time="2023-07-06T18:02:28.780444840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:02:28 image-122000 dockerd[1163]: time="2023-07-06T18:02:28.780470298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 18:02:28 image-122000 dockerd[1163]: time="2023-07-06T18:02:28.780491548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:02:34 image-122000 dockerd[1157]: time="2023-07-06T18:02:34.926177218Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 06 18:02:35 image-122000 dockerd[1157]: time="2023-07-06T18:02:35.047806551Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 06 18:02:35 image-122000 dockerd[1157]: time="2023-07-06T18:02:35.063360634Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 06 18:02:35 image-122000 dockerd[1163]: time="2023-07-06T18:02:35.105544968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 18:02:35 image-122000 dockerd[1163]: time="2023-07-06T18:02:35.105573301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:02:35 image-122000 dockerd[1163]: time="2023-07-06T18:02:35.105585093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 18:02:35 image-122000 dockerd[1163]: time="2023-07-06T18:02:35.105589634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:02:34 image-122000 dockerd[1163]: time="2023-07-06T18:02:34.844695093Z" level=info msg="shim disconnected" id=455e76b3d1f1b0cff8620c882dacc32ac97396babe795f1ffd90cb12e84b1e44 namespace=moby
	Jul 06 18:02:34 image-122000 dockerd[1163]: time="2023-07-06T18:02:34.844725676Z" level=warning msg="cleaning up after shim disconnected" id=455e76b3d1f1b0cff8620c882dacc32ac97396babe795f1ffd90cb12e84b1e44 namespace=moby
	Jul 06 18:02:34 image-122000 dockerd[1163]: time="2023-07-06T18:02:34.844729843Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 06 18:02:34 image-122000 dockerd[1157]: time="2023-07-06T18:02:34.844885301Z" level=info msg="ignoring event" container=455e76b3d1f1b0cff8620c882dacc32ac97396babe795f1ffd90cb12e84b1e44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ff4986f615e47       bcb9e554eaab6       7 seconds ago       Running             kube-scheduler            0                   afc45411829a9
	8ae0d6d4acb2d       ab3683b584ae5       7 seconds ago       Running             kube-controller-manager   0                   0c5bc147d5192
	409d60a81eca8       39dfb036b0986       7 seconds ago       Running             kube-apiserver            0                   a3e89796d6c7f
	5bce1dd4a7409       24bc64e911039       7 seconds ago       Running             etcd                      0                   8d43c315b296f
	
	* 
	* ==> describe nodes <==
	* Name:               image-122000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-122000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b6e1a3abc91e215b081da44b95c5d4a34c954e9b
	                    minikube.k8s.io/name=image-122000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_06T11_02_32_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 06 Jul 2023 18:02:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-122000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 06 Jul 2023 18:02:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 06 Jul 2023 18:02:32 +0000   Thu, 06 Jul 2023 18:02:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 06 Jul 2023 18:02:32 +0000   Thu, 06 Jul 2023 18:02:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 06 Jul 2023 18:02:32 +0000   Thu, 06 Jul 2023 18:02:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 06 Jul 2023 18:02:32 +0000   Thu, 06 Jul 2023 18:02:29 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-122000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 34c4be73802042009e3f3174f133d27f
	  System UUID:                34c4be73802042009e3f3174f133d27f
	  Boot ID:                    caef05ae-108a-49f1-a4e5-24a0fbee99d3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-122000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-122000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-image-122000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-122000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 8s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  8s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7s (x8 over 8s)  kubelet  Node image-122000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 8s)  kubelet  Node image-122000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x7 over 8s)  kubelet  Node image-122000 status is now: NodeHasSufficientPID
	  Normal  Starting                 3s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s               kubelet  Node image-122000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s               kubelet  Node image-122000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s               kubelet  Node image-122000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Jul 6 18:02] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.662606] EINJ: EINJ table not found.
	[  +0.530407] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044956] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000868] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.283423] systemd-fstab-generator[479]: Ignoring "noauto" for root device
	[  +0.067097] systemd-fstab-generator[490]: Ignoring "noauto" for root device
	[  +0.450677] systemd-fstab-generator[756]: Ignoring "noauto" for root device
	[  +0.190900] systemd-fstab-generator[791]: Ignoring "noauto" for root device
	[  +0.084840] systemd-fstab-generator[802]: Ignoring "noauto" for root device
	[  +0.088198] systemd-fstab-generator[815]: Ignoring "noauto" for root device
	[  +1.227550] systemd-fstab-generator[972]: Ignoring "noauto" for root device
	[  +0.073538] systemd-fstab-generator[983]: Ignoring "noauto" for root device
	[  +0.078032] systemd-fstab-generator[994]: Ignoring "noauto" for root device
	[  +0.082676] systemd-fstab-generator[1005]: Ignoring "noauto" for root device
	[  +0.090285] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +2.516079] systemd-fstab-generator[1150]: Ignoring "noauto" for root device
	[  +1.434283] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.892892] systemd-fstab-generator[1480]: Ignoring "noauto" for root device
	[  +4.648836] systemd-fstab-generator[2321]: Ignoring "noauto" for root device
	[  +2.823973] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [5bce1dd4a740] <==
	* {"level":"info","ts":"2023-07-06T18:02:28.798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-07-06T18:02:28.798Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-07-06T18:02:28.819Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-06T18:02:28.819Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-06T18:02:28.819Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-06T18:02:28.819Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-07-06T18:02:28.819Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-07-06T18:02:28.893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-06T18:02:28.893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-06T18:02:28.893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-07-06T18:02:28.893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-07-06T18:02:28.893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-07-06T18:02:28.893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-07-06T18:02:28.893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-07-06T18:02:28.895Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-122000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-06T18:02:28.895Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-06T18:02:28.895Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-06T18:02:28.895Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-06T18:02:28.895Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T18:02:28.895Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T18:02:28.895Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T18:02:28.895Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T18:02:28.895Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-06T18:02:28.895Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-07-06T18:02:28.896Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  18:02:35 up 0 min,  0 users,  load average: 0.44, 0.10, 0.03
	Linux image-122000 5.10.57 #1 SMP PREEMPT Fri Jun 30 18:49:58 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [409d60a81eca] <==
	* I0706 18:02:30.003850       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0706 18:02:30.003861       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0706 18:02:30.003902       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0706 18:02:30.003915       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0706 18:02:30.003926       1 shared_informer.go:318] Caches are synced for configmaps
	I0706 18:02:30.033828       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0706 18:02:30.052252       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0706 18:02:30.052413       1 aggregator.go:152] initial CRD sync complete...
	I0706 18:02:30.052445       1 autoregister_controller.go:141] Starting autoregister controller
	I0706 18:02:30.052483       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0706 18:02:30.052504       1 cache.go:39] Caches are synced for autoregister controller
	I0706 18:02:30.752532       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0706 18:02:30.910228       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0706 18:02:30.915114       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0706 18:02:30.915140       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0706 18:02:31.098087       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0706 18:02:31.108587       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0706 18:02:31.180691       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0706 18:02:31.182599       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0706 18:02:31.183040       1 controller.go:624] quota admission added evaluator for: endpoints
	I0706 18:02:31.184344       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0706 18:02:31.993699       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0706 18:02:32.398261       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0706 18:02:32.403262       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0706 18:02:32.408001       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [8ae0d6d4acb2] <==
	* I0706 18:02:29.115764       1 serving.go:348] Generated self-signed cert in-memory
	I0706 18:02:29.308720       1 controllermanager.go:187] "Starting" version="v1.27.3"
	I0706 18:02:29.308795       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 18:02:29.309430       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0706 18:02:29.309506       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0706 18:02:29.309930       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0706 18:02:29.309992       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0706 18:02:31.987141       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0706 18:02:31.995184       1 controllermanager.go:638] "Started controller" controller="endpoint"
	I0706 18:02:31.995270       1 endpoints_controller.go:172] Starting endpoint controller
	I0706 18:02:31.995278       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0706 18:02:31.999783       1 controllermanager.go:638] "Started controller" controller="cronjob"
	I0706 18:02:31.999863       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0706 18:02:31.999868       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0706 18:02:32.005637       1 controllermanager.go:638] "Started controller" controller="ttl"
	I0706 18:02:32.005723       1 ttl_controller.go:124] "Starting TTL controller"
	I0706 18:02:32.005737       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0706 18:02:32.009171       1 controllermanager.go:638] "Started controller" controller="bootstrapsigner"
	I0706 18:02:32.009275       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0706 18:02:32.087491       1 shared_informer.go:318] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [ff4986f615e4] <==
	* W0706 18:02:29.984443       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0706 18:02:29.984448       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0706 18:02:29.984458       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0706 18:02:29.984461       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0706 18:02:29.984473       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0706 18:02:29.984477       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0706 18:02:29.984526       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0706 18:02:29.984530       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0706 18:02:29.984553       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0706 18:02:29.984558       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0706 18:02:29.984664       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0706 18:02:29.984700       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0706 18:02:30.878166       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0706 18:02:30.878219       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0706 18:02:30.891393       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0706 18:02:30.891415       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0706 18:02:30.925936       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0706 18:02:30.926016       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0706 18:02:30.932887       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0706 18:02:30.932939       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0706 18:02:30.954628       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0706 18:02:30.954664       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0706 18:02:30.981476       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0706 18:02:30.981573       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0706 18:02:31.581691       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-07-06 18:02:15 UTC, ends at Thu 2023-07-06 18:02:35 UTC. --
	Jul 06 18:02:32 image-122000 kubelet[2341]: I0706 18:02:32.564486    2341 topology_manager.go:212] "Topology Admit Handler"
	Jul 06 18:02:32 image-122000 kubelet[2341]: E0706 18:02:32.569243    2341 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-image-122000\" already exists" pod="kube-system/kube-scheduler-image-122000"
	Jul 06 18:02:32 image-122000 kubelet[2341]: E0706 18:02:32.569274    2341 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-122000\" already exists" pod="kube-system/kube-apiserver-image-122000"
	Jul 06 18:02:32 image-122000 kubelet[2341]: E0706 18:02:32.570283    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 06 18:02:32 image-122000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 06 18:02:32 image-122000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 06 18:02:32 image-122000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 06 18:02:32 image-122000 kubelet[2341]: I0706 18:02:32.744467    2341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/0d212d9dd7dc3961ce2f668037f4a8af-etcd-data\") pod \"etcd-image-122000\" (UID: \"0d212d9dd7dc3961ce2f668037f4a8af\") " pod="kube-system/etcd-image-122000"
	Jul 06 18:02:32 image-122000 kubelet[2341]: I0706 18:02:32.744486    2341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/afc526e5c04650b8fd89fdae0fc2016f-ca-certs\") pod \"kube-apiserver-image-122000\" (UID: \"afc526e5c04650b8fd89fdae0fc2016f\") " pod="kube-system/kube-apiserver-image-122000"
	Jul 06 18:02:32 image-122000 kubelet[2341]: I0706 18:02:32.744497    2341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/afc526e5c04650b8fd89fdae0fc2016f-k8s-certs\") pod \"kube-apiserver-image-122000\" (UID: \"afc526e5c04650b8fd89fdae0fc2016f\") " pod="kube-system/kube-apiserver-image-122000"
	Jul 06 18:02:32 image-122000 kubelet[2341]: I0706 18:02:32.744524    2341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9d1dda7f0fcab08de3afc7e2698ed3b-flexvolume-dir\") pod \"kube-controller-manager-image-122000\" (UID: \"c9d1dda7f0fcab08de3afc7e2698ed3b\") " pod="kube-system/kube-controller-manager-image-122000"
	Jul 06 18:02:32 image-122000 kubelet[2341]: I0706 18:02:32.744542    2341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9d1dda7f0fcab08de3afc7e2698ed3b-kubeconfig\") pod \"kube-controller-manager-image-122000\" (UID: \"c9d1dda7f0fcab08de3afc7e2698ed3b\") " pod="kube-system/kube-controller-manager-image-122000"
	Jul 06 18:02:32 image-122000 kubelet[2341]: I0706 18:02:32.744565    2341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/0d212d9dd7dc3961ce2f668037f4a8af-etcd-certs\") pod \"etcd-image-122000\" (UID: \"0d212d9dd7dc3961ce2f668037f4a8af\") " pod="kube-system/etcd-image-122000"
	Jul 06 18:02:32 image-122000 kubelet[2341]: I0706 18:02:32.744577    2341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/afc526e5c04650b8fd89fdae0fc2016f-usr-share-ca-certificates\") pod \"kube-apiserver-image-122000\" (UID: \"afc526e5c04650b8fd89fdae0fc2016f\") " pod="kube-system/kube-apiserver-image-122000"
	Jul 06 18:02:32 image-122000 kubelet[2341]: I0706 18:02:32.744588    2341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9d1dda7f0fcab08de3afc7e2698ed3b-ca-certs\") pod \"kube-controller-manager-image-122000\" (UID: \"c9d1dda7f0fcab08de3afc7e2698ed3b\") " pod="kube-system/kube-controller-manager-image-122000"
	Jul 06 18:02:32 image-122000 kubelet[2341]: I0706 18:02:32.744596    2341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9d1dda7f0fcab08de3afc7e2698ed3b-k8s-certs\") pod \"kube-controller-manager-image-122000\" (UID: \"c9d1dda7f0fcab08de3afc7e2698ed3b\") " pod="kube-system/kube-controller-manager-image-122000"
	Jul 06 18:02:32 image-122000 kubelet[2341]: I0706 18:02:32.744613    2341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9d1dda7f0fcab08de3afc7e2698ed3b-usr-share-ca-certificates\") pod \"kube-controller-manager-image-122000\" (UID: \"c9d1dda7f0fcab08de3afc7e2698ed3b\") " pod="kube-system/kube-controller-manager-image-122000"
	Jul 06 18:02:32 image-122000 kubelet[2341]: I0706 18:02:32.744632    2341 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4525dc724fc5e2b0acb8cc001fea85da-kubeconfig\") pod \"kube-scheduler-image-122000\" (UID: \"4525dc724fc5e2b0acb8cc001fea85da\") " pod="kube-system/kube-scheduler-image-122000"
	Jul 06 18:02:33 image-122000 kubelet[2341]: I0706 18:02:33.429945    2341 apiserver.go:52] "Watching apiserver"
	Jul 06 18:02:33 image-122000 kubelet[2341]: I0706 18:02:33.444111    2341 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jul 06 18:02:33 image-122000 kubelet[2341]: I0706 18:02:33.449454    2341 reconciler.go:41] "Reconciler: start to sync state"
	Jul 06 18:02:33 image-122000 kubelet[2341]: I0706 18:02:33.503106    2341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-122000" podStartSLOduration=2.503079467 podCreationTimestamp="2023-07-06 18:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-06 18:02:33.497813384 +0000 UTC m=+1.116497751" watchObservedRunningTime="2023-07-06 18:02:33.503079467 +0000 UTC m=+1.121763793"
	Jul 06 18:02:33 image-122000 kubelet[2341]: I0706 18:02:33.506852    2341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-122000" podStartSLOduration=1.506834634 podCreationTimestamp="2023-07-06 18:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-06 18:02:33.503297384 +0000 UTC m=+1.121981751" watchObservedRunningTime="2023-07-06 18:02:33.506834634 +0000 UTC m=+1.125519001"
	Jul 06 18:02:33 image-122000 kubelet[2341]: I0706 18:02:33.511551    2341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-122000" podStartSLOduration=2.511015175 podCreationTimestamp="2023-07-06 18:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-06 18:02:33.507116425 +0000 UTC m=+1.125800793" watchObservedRunningTime="2023-07-06 18:02:33.511015175 +0000 UTC m=+1.129699543"
	Jul 06 18:02:33 image-122000 kubelet[2341]: I0706 18:02:33.511850    2341 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-122000" podStartSLOduration=1.511838092 podCreationTimestamp="2023-07-06 18:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-06 18:02:33.51097955 +0000 UTC m=+1.129663918" watchObservedRunningTime="2023-07-06 18:02:33.511838092 +0000 UTC m=+1.130522460"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-122000 -n image-122000
helpers_test.go:261: (dbg) Run:  kubectl --context image-122000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-122000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-122000 describe pod storage-provisioner: exit status 1 (38.719334ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-122000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.10s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (49.28s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-946000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-946000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.943122459s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-946000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-946000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [37cbc639-61db-49d9-8bb9-75ee4a6e98bd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [37cbc639-61db-49d9-8bb9-75ee4a6e98bd] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.010303625s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-946000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-946000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-946000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.04181675s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached


stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-946000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-946000 addons disable ingress-dns --alsologtostderr -v=1: (6.951732292s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-946000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-946000 addons disable ingress --alsologtostderr -v=1: (7.065627708s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-946000 -n ingress-addon-legacy-946000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-946000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-802000                                        | functional-802000           | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|                | update-context                                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                             |         |         |                     |                     |
	| update-context | functional-802000                                        | functional-802000           | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	|                | update-context                                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                             |         |         |                     |                     |
	| image          | functional-802000 image ls                               | functional-802000           | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:01 PDT |
	| image          | functional-802000 image save --daemon                    | functional-802000           | jenkins | v1.30.1 | 06 Jul 23 11:01 PDT | 06 Jul 23 11:02 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-802000 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image          | functional-802000                                        | functional-802000           | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | image ls --format short                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image          | functional-802000                                        | functional-802000           | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | image ls --format yaml                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image          | functional-802000                                        | functional-802000           | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT |                     |
	|                | image ls --format json                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image          | functional-802000                                        | functional-802000           | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | image ls --format table                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| ssh            | functional-802000 ssh pgrep                              | functional-802000           | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT |                     |
	|                | buildkitd                                                |                             |         |         |                     |                     |
	| image          | functional-802000 image build -t                         | functional-802000           | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | localhost/my-image:functional-802000                     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                             |         |         |                     |                     |
	| image          | functional-802000 image ls                               | functional-802000           | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	| delete         | -p functional-802000                                     | functional-802000           | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	| start          | -p image-122000 --driver=qemu2                           | image-122000                | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                |                                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-122000                | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|                | -p image-122000                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-122000                | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                 |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                       |                             |         |         |                     |                     |
	|                | image-122000                                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-122000                | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                                  |                             |         |         |                     |                     |
	|                | image-122000                                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-122000                | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	|                | -f inner/Dockerfile                                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f                            |                             |         |         |                     |                     |
	|                | -p image-122000                                          |                             |         |         |                     |                     |
	| delete         | -p image-122000                                          | image-122000                | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:02 PDT |
	| start          | -p ingress-addon-legacy-946000                           | ingress-addon-legacy-946000 | jenkins | v1.30.1 | 06 Jul 23 11:02 PDT | 06 Jul 23 11:03 PDT |
	|                | --kubernetes-version=v1.18.20                            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-946000                              | ingress-addon-legacy-946000 | jenkins | v1.30.1 | 06 Jul 23 11:03 PDT | 06 Jul 23 11:03 PDT |
	|                | addons enable ingress                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-946000                              | ingress-addon-legacy-946000 | jenkins | v1.30.1 | 06 Jul 23 11:03 PDT | 06 Jul 23 11:03 PDT |
	|                | addons enable ingress-dns                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-946000                              | ingress-addon-legacy-946000 | jenkins | v1.30.1 | 06 Jul 23 11:04 PDT | 06 Jul 23 11:04 PDT |
	|                | ssh curl -s http://127.0.0.1/                            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-946000 ip                           | ingress-addon-legacy-946000 | jenkins | v1.30.1 | 06 Jul 23 11:04 PDT | 06 Jul 23 11:04 PDT |
	| addons         | ingress-addon-legacy-946000                              | ingress-addon-legacy-946000 | jenkins | v1.30.1 | 06 Jul 23 11:04 PDT | 06 Jul 23 11:04 PDT |
	|                | addons disable ingress-dns                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-946000                              | ingress-addon-legacy-946000 | jenkins | v1.30.1 | 06 Jul 23 11:04 PDT | 06 Jul 23 11:04 PDT |
	|                | addons disable ingress                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/06 11:02:35
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0706 11:02:35.968192    3346 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:02:35.968320    3346 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:02:35.968323    3346 out.go:309] Setting ErrFile to fd 2...
	I0706 11:02:35.968326    3346 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:02:35.968394    3346 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:02:35.969485    3346 out.go:303] Setting JSON to false
	I0706 11:02:35.985044    3346 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1927,"bootTime":1688664628,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:02:35.985109    3346 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:02:35.988872    3346 out.go:177] * [ingress-addon-legacy-946000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:02:35.994919    3346 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:02:35.994941    3346 notify.go:220] Checking for updates...
	I0706 11:02:35.997889    3346 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:02:36.000920    3346 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:02:36.003819    3346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:02:36.006832    3346 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:02:36.009895    3346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:02:36.012975    3346 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:02:36.016872    3346 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:02:36.022731    3346 start.go:297] selected driver: qemu2
	I0706 11:02:36.022742    3346 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:02:36.022748    3346 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:02:36.024763    3346 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:02:36.027884    3346 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:02:36.030993    3346 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:02:36.031027    3346 cni.go:84] Creating CNI manager for ""
	I0706 11:02:36.031035    3346 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0706 11:02:36.031047    3346 start_flags.go:319] config:
	{Name:ingress-addon-legacy-946000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-946000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:02:36.035298    3346 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:02:36.042934    3346 out.go:177] * Starting control plane node ingress-addon-legacy-946000 in cluster ingress-addon-legacy-946000
	I0706 11:02:36.046854    3346 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0706 11:02:36.099335    3346 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0706 11:02:36.099354    3346 cache.go:57] Caching tarball of preloaded images
	I0706 11:02:36.099525    3346 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0706 11:02:36.105892    3346 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0706 11:02:36.113884    3346 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0706 11:02:36.193275    3346 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0706 11:02:42.180660    3346 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0706 11:02:42.180801    3346 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0706 11:02:42.929150    3346 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0706 11:02:42.929327    3346 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/config.json ...
	I0706 11:02:42.929352    3346 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/config.json: {Name:mk02a5295c5e36d27c7839c678b95da64b72eeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:02:42.929570    3346 start.go:365] acquiring machines lock for ingress-addon-legacy-946000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:02:42.929594    3346 start.go:369] acquired machines lock for "ingress-addon-legacy-946000" in 19.333µs
	I0706 11:02:42.929604    3346 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-946000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-946000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:02:42.929639    3346 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:02:42.940607    3346 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0706 11:02:42.955337    3346 start.go:159] libmachine.API.Create for "ingress-addon-legacy-946000" (driver="qemu2")
	I0706 11:02:42.955360    3346 client.go:168] LocalClient.Create starting
	I0706 11:02:42.955449    3346 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:02:42.955472    3346 main.go:141] libmachine: Decoding PEM data...
	I0706 11:02:42.955482    3346 main.go:141] libmachine: Parsing certificate...
	I0706 11:02:42.955527    3346 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:02:42.955541    3346 main.go:141] libmachine: Decoding PEM data...
	I0706 11:02:42.955550    3346 main.go:141] libmachine: Parsing certificate...
	I0706 11:02:42.955886    3346 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:02:43.095710    3346 main.go:141] libmachine: Creating SSH key...
	I0706 11:02:43.195735    3346 main.go:141] libmachine: Creating Disk image...
	I0706 11:02:43.195743    3346 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:02:43.195883    3346 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/disk.qcow2
	I0706 11:02:43.204612    3346 main.go:141] libmachine: STDOUT: 
	I0706 11:02:43.204623    3346 main.go:141] libmachine: STDERR: 
	I0706 11:02:43.204696    3346 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/disk.qcow2 +20000M
	I0706 11:02:43.211810    3346 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:02:43.211819    3346 main.go:141] libmachine: STDERR: 
	I0706 11:02:43.211836    3346 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/disk.qcow2
	I0706 11:02:43.211842    3346 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:02:43.211875    3346 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:9b:0e:95:1a:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/disk.qcow2
	I0706 11:02:43.245697    3346 main.go:141] libmachine: STDOUT: 
	I0706 11:02:43.245731    3346 main.go:141] libmachine: STDERR: 
	I0706 11:02:43.245735    3346 main.go:141] libmachine: Attempt 0
	I0706 11:02:43.245751    3346 main.go:141] libmachine: Searching for 2:9b:e:95:1a:ef in /var/db/dhcpd_leases ...
	I0706 11:02:43.245814    3346 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0706 11:02:43.245833    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4e:c8:ba:21:5:d ID:1,4e:c8:ba:21:5:d Lease:0x64a85327}
	I0706 11:02:43.245840    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ba:5:ab:6d:ca:57 ID:1,ba:5:ab:6d:ca:57 Lease:0x64a85231}
	I0706 11:02:43.245846    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:d8:2a:1f:c8:40 ID:1,7a:d8:2a:1f:c8:40 Lease:0x64a700a4}
	I0706 11:02:43.245851    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:2e:9c:e5:0:5b ID:1,b2:2e:9c:e5:0:5b Lease:0x64a851e3}
	I0706 11:02:45.248024    3346 main.go:141] libmachine: Attempt 1
	I0706 11:02:45.248102    3346 main.go:141] libmachine: Searching for 2:9b:e:95:1a:ef in /var/db/dhcpd_leases ...
	I0706 11:02:45.248464    3346 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0706 11:02:45.248514    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4e:c8:ba:21:5:d ID:1,4e:c8:ba:21:5:d Lease:0x64a85327}
	I0706 11:02:45.248570    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ba:5:ab:6d:ca:57 ID:1,ba:5:ab:6d:ca:57 Lease:0x64a85231}
	I0706 11:02:45.248604    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:d8:2a:1f:c8:40 ID:1,7a:d8:2a:1f:c8:40 Lease:0x64a700a4}
	I0706 11:02:45.248636    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:2e:9c:e5:0:5b ID:1,b2:2e:9c:e5:0:5b Lease:0x64a851e3}
	I0706 11:02:47.250810    3346 main.go:141] libmachine: Attempt 2
	I0706 11:02:47.250842    3346 main.go:141] libmachine: Searching for 2:9b:e:95:1a:ef in /var/db/dhcpd_leases ...
	I0706 11:02:47.250953    3346 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0706 11:02:47.250964    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4e:c8:ba:21:5:d ID:1,4e:c8:ba:21:5:d Lease:0x64a85327}
	I0706 11:02:47.250969    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ba:5:ab:6d:ca:57 ID:1,ba:5:ab:6d:ca:57 Lease:0x64a85231}
	I0706 11:02:47.250984    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:d8:2a:1f:c8:40 ID:1,7a:d8:2a:1f:c8:40 Lease:0x64a700a4}
	I0706 11:02:47.250989    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:2e:9c:e5:0:5b ID:1,b2:2e:9c:e5:0:5b Lease:0x64a851e3}
	I0706 11:02:49.253015    3346 main.go:141] libmachine: Attempt 3
	I0706 11:02:49.253024    3346 main.go:141] libmachine: Searching for 2:9b:e:95:1a:ef in /var/db/dhcpd_leases ...
	I0706 11:02:49.253067    3346 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0706 11:02:49.253081    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4e:c8:ba:21:5:d ID:1,4e:c8:ba:21:5:d Lease:0x64a85327}
	I0706 11:02:49.253090    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ba:5:ab:6d:ca:57 ID:1,ba:5:ab:6d:ca:57 Lease:0x64a85231}
	I0706 11:02:49.253095    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:d8:2a:1f:c8:40 ID:1,7a:d8:2a:1f:c8:40 Lease:0x64a700a4}
	I0706 11:02:49.253117    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:2e:9c:e5:0:5b ID:1,b2:2e:9c:e5:0:5b Lease:0x64a851e3}
	I0706 11:02:51.255147    3346 main.go:141] libmachine: Attempt 4
	I0706 11:02:51.255164    3346 main.go:141] libmachine: Searching for 2:9b:e:95:1a:ef in /var/db/dhcpd_leases ...
	I0706 11:02:51.255338    3346 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0706 11:02:51.255377    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4e:c8:ba:21:5:d ID:1,4e:c8:ba:21:5:d Lease:0x64a85327}
	I0706 11:02:51.255385    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ba:5:ab:6d:ca:57 ID:1,ba:5:ab:6d:ca:57 Lease:0x64a85231}
	I0706 11:02:51.255390    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:d8:2a:1f:c8:40 ID:1,7a:d8:2a:1f:c8:40 Lease:0x64a700a4}
	I0706 11:02:51.255396    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:2e:9c:e5:0:5b ID:1,b2:2e:9c:e5:0:5b Lease:0x64a851e3}
	I0706 11:02:53.255567    3346 main.go:141] libmachine: Attempt 5
	I0706 11:02:53.255587    3346 main.go:141] libmachine: Searching for 2:9b:e:95:1a:ef in /var/db/dhcpd_leases ...
	I0706 11:02:53.255656    3346 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0706 11:02:53.255665    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4e:c8:ba:21:5:d ID:1,4e:c8:ba:21:5:d Lease:0x64a85327}
	I0706 11:02:53.255671    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ba:5:ab:6d:ca:57 ID:1,ba:5:ab:6d:ca:57 Lease:0x64a85231}
	I0706 11:02:53.255677    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:d8:2a:1f:c8:40 ID:1,7a:d8:2a:1f:c8:40 Lease:0x64a700a4}
	I0706 11:02:53.255685    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:2e:9c:e5:0:5b ID:1,b2:2e:9c:e5:0:5b Lease:0x64a851e3}
	I0706 11:02:55.257784    3346 main.go:141] libmachine: Attempt 6
	I0706 11:02:55.257819    3346 main.go:141] libmachine: Searching for 2:9b:e:95:1a:ef in /var/db/dhcpd_leases ...
	I0706 11:02:55.257886    3346 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0706 11:02:55.257899    3346 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:2:9b:e:95:1a:ef ID:1,2:9b:e:95:1a:ef Lease:0x64a8534e}
	I0706 11:02:55.257903    3346 main.go:141] libmachine: Found match: 2:9b:e:95:1a:ef
	I0706 11:02:55.257911    3346 main.go:141] libmachine: IP: 192.168.105.6
	I0706 11:02:55.257917    3346 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
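The loop above polls `/var/db/dhcpd_leases` until an entry with the VM's MAC appears, then reads its IP. A minimal sketch of the same lookup against inline sample data (the lease lines mirror the format logged above; the heredoc stands in for the real lease file):

```shell
# Sample lease entries in the format libmachine parses (illustrative data).
leases=$(cat <<'EOF'
{Name:minikube IPAddress:192.168.105.5 HWAddress:4e:c8:ba:21:5:d ID:1,4e:c8:ba:21:5:d Lease:0x64a85327}
{Name:minikube IPAddress:192.168.105.6 HWAddress:2:9b:e:95:1a:ef ID:1,2:9b:e:95:1a:ef Lease:0x64a8534e}
EOF
)
mac='2:9b:e:95:1a:ef'
# Select the entry whose HWAddress matches, then extract its IPAddress field.
ip=$(printf '%s\n' "$leases" | grep "HWAddress:$mac " | sed -E 's/.*IPAddress:([0-9.]+).*/\1/')
echo "$ip"
```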
	I0706 11:02:56.265616    3346 machine.go:88] provisioning docker machine ...
	I0706 11:02:56.265637    3346 buildroot.go:166] provisioning hostname "ingress-addon-legacy-946000"
	I0706 11:02:56.265697    3346 main.go:141] libmachine: Using SSH client type: native
	I0706 11:02:56.265987    3346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012f91e0] 0x1012fbc40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0706 11:02:56.265996    3346 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-946000 && echo "ingress-addon-legacy-946000" | sudo tee /etc/hostname
	I0706 11:02:56.343486    3346 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-946000
	
	I0706 11:02:56.343553    3346 main.go:141] libmachine: Using SSH client type: native
	I0706 11:02:56.343792    3346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012f91e0] 0x1012fbc40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0706 11:02:56.343800    3346 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-946000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-946000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-946000' | sudo tee -a /etc/hosts; 
				fi
			fi
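The `/etc/hosts` edit above either rewrites an existing `127.0.1.1` line or appends one if none exists. The same rewrite-or-append logic can be exercised against a throwaway file (no `sudo`; assumes GNU `sed -i`, as on the Buildroot guest):

```shell
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name=ingress-addon-legacy-946000
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # Hostname line exists: rewrite it in place (GNU sed).
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    # No 127.0.1.1 line yet: append one.
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```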
	I0706 11:02:56.417556    3346 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0706 11:02:56.417570    3346 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15452-1247/.minikube CaCertPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15452-1247/.minikube}
	I0706 11:02:56.417577    3346 buildroot.go:174] setting up certificates
	I0706 11:02:56.417581    3346 provision.go:83] configureAuth start
	I0706 11:02:56.417586    3346 provision.go:138] copyHostCerts
	I0706 11:02:56.417614    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cert.pem
	I0706 11:02:56.417666    3346 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1247/.minikube/cert.pem, removing ...
	I0706 11:02:56.417672    3346 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1247/.minikube/cert.pem
	I0706 11:02:56.417796    3346 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15452-1247/.minikube/cert.pem (1123 bytes)
	I0706 11:02:56.417929    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15452-1247/.minikube/key.pem
	I0706 11:02:56.417955    3346 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1247/.minikube/key.pem, removing ...
	I0706 11:02:56.417958    3346 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1247/.minikube/key.pem
	I0706 11:02:56.418002    3346 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15452-1247/.minikube/key.pem (1675 bytes)
	I0706 11:02:56.418071    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.pem
	I0706 11:02:56.418096    3346 exec_runner.go:144] found /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.pem, removing ...
	I0706 11:02:56.418098    3346 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.pem
	I0706 11:02:56.418142    3346 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.pem (1078 bytes)
	I0706 11:02:56.418261    3346 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-946000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-946000]
	I0706 11:02:56.512779    3346 provision.go:172] copyRemoteCerts
	I0706 11:02:56.512818    3346 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0706 11:02:56.512826    3346 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/id_rsa Username:docker}
	I0706 11:02:56.550264    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0706 11:02:56.550309    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0706 11:02:56.557407    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0706 11:02:56.557448    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0706 11:02:56.564756    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0706 11:02:56.564792    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0706 11:02:56.572329    3346 provision.go:86] duration metric: configureAuth took 154.729584ms
	I0706 11:02:56.572339    3346 buildroot.go:189] setting minikube options for container-runtime
	I0706 11:02:56.572465    3346 config.go:182] Loaded profile config "ingress-addon-legacy-946000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0706 11:02:56.572509    3346 main.go:141] libmachine: Using SSH client type: native
	I0706 11:02:56.572726    3346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012f91e0] 0x1012fbc40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0706 11:02:56.572731    3346 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0706 11:02:56.647565    3346 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0706 11:02:56.647572    3346 buildroot.go:70] root file system type: tmpfs
	I0706 11:02:56.647634    3346 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0706 11:02:56.647688    3346 main.go:141] libmachine: Using SSH client type: native
	I0706 11:02:56.647932    3346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012f91e0] 0x1012fbc40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0706 11:02:56.647976    3346 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0706 11:02:56.723716    3346 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0706 11:02:56.723759    3346 main.go:141] libmachine: Using SSH client type: native
	I0706 11:02:56.723994    3346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012f91e0] 0x1012fbc40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0706 11:02:56.724003    3346 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0706 11:02:57.090776    3346 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
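The `diff ... || { mv ...; systemctl ...; }` command above is an idempotency guard: Docker is only restarted when the generated unit actually differs from the installed one (or, as here, when no unit is installed yet). The same guard against throwaway files, with the `systemctl` steps omitted:

```shell
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd -H unix://\n' > "$dir/docker.service.new"
# diff exits non-zero when the files differ or the target is missing,
# so the replace branch runs only when an update is needed.
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null || \
  mv "$dir/docker.service.new" "$dir/docker.service"
ls "$dir"
```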
	
	I0706 11:02:57.090789    3346 machine.go:91] provisioned docker machine in 825.164209ms
	I0706 11:02:57.090803    3346 client.go:171] LocalClient.Create took 14.135477208s
	I0706 11:02:57.090817    3346 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-946000" took 14.135528667s
	I0706 11:02:57.090822    3346 start.go:300] post-start starting for "ingress-addon-legacy-946000" (driver="qemu2")
	I0706 11:02:57.090827    3346 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0706 11:02:57.090898    3346 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0706 11:02:57.090908    3346 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/id_rsa Username:docker}
	I0706 11:02:57.129030    3346 ssh_runner.go:195] Run: cat /etc/os-release
	I0706 11:02:57.130594    3346 info.go:137] Remote host: Buildroot 2021.02.12
	I0706 11:02:57.130602    3346 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1247/.minikube/addons for local assets ...
	I0706 11:02:57.130676    3346 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15452-1247/.minikube/files for local assets ...
	I0706 11:02:57.130795    3346 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15452-1247/.minikube/files/etc/ssl/certs/24652.pem -> 24652.pem in /etc/ssl/certs
	I0706 11:02:57.130799    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/files/etc/ssl/certs/24652.pem -> /etc/ssl/certs/24652.pem
	I0706 11:02:57.130908    3346 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0706 11:02:57.133457    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/files/etc/ssl/certs/24652.pem --> /etc/ssl/certs/24652.pem (1708 bytes)
	I0706 11:02:57.140872    3346 start.go:303] post-start completed in 50.044917ms
	I0706 11:02:57.141266    3346 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/config.json ...
	I0706 11:02:57.141426    3346 start.go:128] duration metric: createHost completed in 14.211824792s
	I0706 11:02:57.141458    3346 main.go:141] libmachine: Using SSH client type: native
	I0706 11:02:57.141677    3346 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012f91e0] 0x1012fbc40 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0706 11:02:57.141682    3346 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0706 11:02:57.213941    3346 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688666577.566061626
	
	I0706 11:02:57.213949    3346 fix.go:206] guest clock: 1688666577.566061626
	I0706 11:02:57.213952    3346 fix.go:219] Guest: 2023-07-06 11:02:57.566061626 -0700 PDT Remote: 2023-07-06 11:02:57.141431 -0700 PDT m=+21.192980335 (delta=424.630626ms)
	I0706 11:02:57.213962    3346 fix.go:190] guest clock delta is within tolerance: 424.630626ms
	I0706 11:02:57.213964    3346 start.go:83] releasing machines lock for "ingress-addon-legacy-946000", held for 14.284412458s
	I0706 11:02:57.214242    3346 ssh_runner.go:195] Run: cat /version.json
	I0706 11:02:57.214250    3346 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/id_rsa Username:docker}
	I0706 11:02:57.214272    3346 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0706 11:02:57.214302    3346 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/id_rsa Username:docker}
	I0706 11:02:57.298144    3346 ssh_runner.go:195] Run: systemctl --version
	I0706 11:02:57.300443    3346 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0706 11:02:57.302494    3346 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0706 11:02:57.302526    3346 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0706 11:02:57.305762    3346 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0706 11:02:57.311156    3346 cni.go:314] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0706 11:02:57.311163    3346 start.go:466] detecting cgroup driver to use...
	I0706 11:02:57.311228    3346 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 11:02:57.318118    3346 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0706 11:02:57.321406    3346 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0706 11:02:57.324829    3346 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0706 11:02:57.324860    3346 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0706 11:02:57.328296    3346 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 11:02:57.331537    3346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0706 11:02:57.334622    3346 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 11:02:57.337645    3346 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0706 11:02:57.341134    3346 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
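The run of `sed` commands above rewrites `/etc/containerd/config.toml` to force the `cgroupfs` driver. One of those substitutions, applied to a minimal sample config (the TOML fragment here is illustrative, not the guest's full file):

```shell
cfg=$(mktemp)
printf '[plugins."io.containerd.grpc.v1.cri"]\n  SystemdCgroup = true\n' > "$cfg"
# Same substitution the provisioner runs: flip SystemdCgroup to false,
# preserving the line's leading indentation via the capture group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep SystemdCgroup "$cfg"
```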
	I0706 11:02:57.344503    3346 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0706 11:02:57.347508    3346 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0706 11:02:57.350132    3346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 11:02:57.429334    3346 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0706 11:02:57.438039    3346 start.go:466] detecting cgroup driver to use...
	I0706 11:02:57.438120    3346 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0706 11:02:57.443172    3346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 11:02:57.448174    3346 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0706 11:02:57.456238    3346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 11:02:57.460899    3346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 11:02:57.465872    3346 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0706 11:02:57.508550    3346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 11:02:57.513827    3346 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 11:02:57.519070    3346 ssh_runner.go:195] Run: which cri-dockerd
	I0706 11:02:57.520360    3346 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0706 11:02:57.522967    3346 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0706 11:02:57.527708    3346 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0706 11:02:57.602849    3346 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0706 11:02:57.683680    3346 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0706 11:02:57.683693    3346 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0706 11:02:57.688911    3346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 11:02:57.775808    3346 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 11:02:58.935674    3346 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.159854959s)
	I0706 11:02:58.935771    3346 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 11:02:58.953882    3346 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 11:02:58.975745    3346 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.2 ...
	I0706 11:02:58.975844    3346 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0706 11:02:58.977431    3346 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
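The `host.minikube.internal` update above filters out any stale entry, appends the current one, and replaces the file via a temp copy rather than editing it in place. The same pattern on a scratch file (the IPs are sample values; `bash` is assumed for the `$'\t'` quoting, as in the logged command):

```shell
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.105.1\thost.minikube.internal\n' > "$hosts"
tmp=$(mktemp)
# Drop any existing entry, append the new one, then swap in the whole file.
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '192.168.105.9\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"
cat "$hosts"
```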
	I0706 11:02:58.981388    3346 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0706 11:02:58.981439    3346 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0706 11:02:58.986850    3346 docker.go:636] Got preloaded images: 
	I0706 11:02:58.986867    3346 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0706 11:02:58.986907    3346 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0706 11:02:58.989781    3346 ssh_runner.go:195] Run: which lz4
	I0706 11:02:58.991165    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0706 11:02:58.991271    3346 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0706 11:02:58.992578    3346 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0706 11:02:58.992593    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
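The existence check above relies on `stat` exiting non-zero for a missing path: the preload tarball is only copied over when the check fails. A sketch of that gate with a local touch standing in for the `scp` (path and filename are illustrative):

```shell
f=$(mktemp -d)/preloaded.tar.lz4   # illustrative target path
# stat fails for a missing file, mirroring the existence check in the log;
# only then does the copy step run.
if ! stat -c '%s %y' "$f" >/dev/null 2>&1; then
  echo "absent: copying tarball"
  : > "$f"   # stand-in for the real scp transfer
fi
stat -c '%s' "$f"
```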
	I0706 11:03:00.688451    3346 docker.go:600] Took 1.697236 seconds to copy over tarball
	I0706 11:03:00.688509    3346 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0706 11:03:01.987892    3346 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.2993705s)
	I0706 11:03:01.987904    3346 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0706 11:03:02.006961    3346 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0706 11:03:02.010001    3346 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0706 11:03:02.015396    3346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 11:03:02.090127    3346 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 11:03:03.367734    3346 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.277595042s)
	I0706 11:03:03.367859    3346 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0706 11:03:03.373803    3346 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0706 11:03:03.373810    3346 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0706 11:03:03.373814    3346 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0706 11:03:03.384544    3346 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0706 11:03:03.386327    3346 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0706 11:03:03.386483    3346 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0706 11:03:03.386621    3346 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0706 11:03:03.386730    3346 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0706 11:03:03.386734    3346 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0706 11:03:03.387363    3346 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0706 11:03:03.387436    3346 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0706 11:03:03.395784    3346 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0706 11:03:03.396020    3346 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0706 11:03:03.396981    3346 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0706 11:03:03.397071    3346 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0706 11:03:03.397118    3346 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0706 11:03:03.397158    3346 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0706 11:03:03.397146    3346 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0706 11:03:03.397312    3346 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0706 11:03:04.427314    3346 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0706 11:03:04.434193    3346 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0706 11:03:04.434220    3346 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0706 11:03:04.434265    3346 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0706 11:03:04.440138    3346 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0706 11:03:04.651002    3346 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0706 11:03:04.651124    3346 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0706 11:03:04.657719    3346 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0706 11:03:04.657746    3346 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0706 11:03:04.657801    3346 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0706 11:03:04.663972    3346 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0706 11:03:04.778684    3346 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0706 11:03:04.778776    3346 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0706 11:03:04.785187    3346 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0706 11:03:04.785210    3346 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0706 11:03:04.785265    3346 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0706 11:03:04.796176    3346 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0706 11:03:04.831778    3346 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0706 11:03:04.831875    3346 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0706 11:03:04.837585    3346 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0706 11:03:04.837605    3346 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0706 11:03:04.837641    3346 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0706 11:03:04.843597    3346 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0706 11:03:04.886833    3346 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0706 11:03:04.886952    3346 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0706 11:03:04.893326    3346 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0706 11:03:04.893351    3346 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0706 11:03:04.893398    3346 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0706 11:03:04.899456    3346 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0706 11:03:05.051110    3346 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0706 11:03:05.051212    3346 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0706 11:03:05.058335    3346 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0706 11:03:05.058362    3346 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0706 11:03:05.058405    3346 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0706 11:03:05.064245    3346 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0706 11:03:05.267452    3346 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0706 11:03:05.267944    3346 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0706 11:03:05.287155    3346 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0706 11:03:05.287203    3346 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0706 11:03:05.287295    3346 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0706 11:03:05.299983    3346 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0706 11:03:05.430798    3346 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0706 11:03:05.431520    3346 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0706 11:03:05.454132    3346 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0706 11:03:05.454199    3346 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0706 11:03:05.454320    3346 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0706 11:03:05.469198    3346 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0706 11:03:05.469256    3346 cache_images.go:92] LoadImages completed in 2.095439834s
	W0706 11:03:05.469344    3346 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
	I0706 11:03:05.469448    3346 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0706 11:03:05.485711    3346 cni.go:84] Creating CNI manager for ""
	I0706 11:03:05.485731    3346 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0706 11:03:05.485762    3346 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0706 11:03:05.485777    3346 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-946000 NodeName:ingress-addon-legacy-946000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0706 11:03:05.485915    3346 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-946000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0706 11:03:05.485993    3346 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-946000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-946000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0706 11:03:05.486082    3346 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0706 11:03:05.491491    3346 binaries.go:44] Found k8s binaries, skipping transfer
	I0706 11:03:05.491547    3346 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0706 11:03:05.496286    3346 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0706 11:03:05.503811    3346 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0706 11:03:05.510207    3346 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0706 11:03:05.516564    3346 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0706 11:03:05.517949    3346 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0706 11:03:05.521853    3346 certs.go:56] Setting up /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000 for IP: 192.168.105.6
	I0706 11:03:05.521866    3346 certs.go:190] acquiring lock for shared ca certs: {Name:mk763e62c6a9326245ca88f64c15681d0696aa38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:03:05.522005    3346 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.key
	I0706 11:03:05.522045    3346 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.key
	I0706 11:03:05.522074    3346 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.key
	I0706 11:03:05.522081    3346 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt with IP's: []
	I0706 11:03:05.605962    3346 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt ...
	I0706 11:03:05.605967    3346 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: {Name:mk2dfd199e830d8461241ea6d90f2bef4e18df95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:03:05.606205    3346 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.key ...
	I0706 11:03:05.606209    3346 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.key: {Name:mk00c3b0237b0839a1ade5a7e66363035fdee7c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:03:05.606327    3346 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.key.b354f644
	I0706 11:03:05.606335    3346 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0706 11:03:05.759072    3346 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.crt.b354f644 ...
	I0706 11:03:05.759077    3346 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.crt.b354f644: {Name:mk2975350595ec583cb95bd665035a4c89b80da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:03:05.759266    3346 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.key.b354f644 ...
	I0706 11:03:05.759269    3346 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.key.b354f644: {Name:mkeb9aeb36c52d0ffe20ae34a3575ce91bad4040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:03:05.759384    3346 certs.go:337] copying /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.crt
	I0706 11:03:05.759608    3346 certs.go:341] copying /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.key
	I0706 11:03:05.759742    3346 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/proxy-client.key
	I0706 11:03:05.759751    3346 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/proxy-client.crt with IP's: []
	I0706 11:03:05.837787    3346 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/proxy-client.crt ...
	I0706 11:03:05.837792    3346 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/proxy-client.crt: {Name:mke983a3229a018142016ae986d9a179070d842d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:03:05.837956    3346 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/proxy-client.key ...
	I0706 11:03:05.837962    3346 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/proxy-client.key: {Name:mkdd2aaf672f07ae904f278cef144089dcb6d397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:03:05.838079    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0706 11:03:05.838097    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0706 11:03:05.838108    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0706 11:03:05.838120    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0706 11:03:05.838132    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0706 11:03:05.838152    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0706 11:03:05.838163    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0706 11:03:05.838174    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0706 11:03:05.838270    3346 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/2465.pem (1338 bytes)
	W0706 11:03:05.838304    3346 certs.go:433] ignoring /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/2465_empty.pem, impossibly tiny 0 bytes
	I0706 11:03:05.838314    3346 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca-key.pem (1675 bytes)
	I0706 11:03:05.838336    3346 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem (1078 bytes)
	I0706 11:03:05.838359    3346 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem (1123 bytes)
	I0706 11:03:05.838379    3346 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/certs/key.pem (1675 bytes)
	I0706 11:03:05.838452    3346 certs.go:437] found cert: /Users/jenkins/minikube-integration/15452-1247/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15452-1247/.minikube/files/etc/ssl/certs/24652.pem (1708 bytes)
	I0706 11:03:05.838473    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0706 11:03:05.838483    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/2465.pem -> /usr/share/ca-certificates/2465.pem
	I0706 11:03:05.838494    3346 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15452-1247/.minikube/files/etc/ssl/certs/24652.pem -> /usr/share/ca-certificates/24652.pem
	I0706 11:03:05.838885    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0706 11:03:05.846537    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0706 11:03:05.853386    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0706 11:03:05.860789    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0706 11:03:05.867718    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0706 11:03:05.874432    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0706 11:03:05.881407    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0706 11:03:05.888539    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0706 11:03:05.895337    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0706 11:03:05.901934    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/2465.pem --> /usr/share/ca-certificates/2465.pem (1338 bytes)
	I0706 11:03:05.908978    3346 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15452-1247/.minikube/files/etc/ssl/certs/24652.pem --> /usr/share/ca-certificates/24652.pem (1708 bytes)
	I0706 11:03:05.915927    3346 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0706 11:03:05.920889    3346 ssh_runner.go:195] Run: openssl version
	I0706 11:03:05.922871    3346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0706 11:03:05.925900    3346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0706 11:03:05.927300    3346 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  6 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0706 11:03:05.927323    3346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0706 11:03:05.929038    3346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0706 11:03:05.932276    3346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2465.pem && ln -fs /usr/share/ca-certificates/2465.pem /etc/ssl/certs/2465.pem"
	I0706 11:03:05.935048    3346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2465.pem
	I0706 11:03:05.936397    3346 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul  6 17:57 /usr/share/ca-certificates/2465.pem
	I0706 11:03:05.936416    3346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2465.pem
	I0706 11:03:05.938128    3346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2465.pem /etc/ssl/certs/51391683.0"
	I0706 11:03:05.941323    3346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24652.pem && ln -fs /usr/share/ca-certificates/24652.pem /etc/ssl/certs/24652.pem"
	I0706 11:03:05.944691    3346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24652.pem
	I0706 11:03:05.946179    3346 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul  6 17:57 /usr/share/ca-certificates/24652.pem
	I0706 11:03:05.946201    3346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24652.pem
	I0706 11:03:05.947987    3346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24652.pem /etc/ssl/certs/3ec20f2e.0"
	I0706 11:03:05.950892    3346 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0706 11:03:05.952127    3346 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0706 11:03:05.952151    3346 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-946000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-946000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:03:05.952221    3346 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0706 11:03:05.957678    3346 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0706 11:03:05.961135    3346 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0706 11:03:05.964277    3346 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0706 11:03:05.967036    3346 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0706 11:03:05.967049    3346 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0706 11:03:05.992934    3346 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0706 11:03:05.992980    3346 kubeadm.go:322] [preflight] Running pre-flight checks
	I0706 11:03:06.087525    3346 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0706 11:03:06.087578    3346 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0706 11:03:06.087630    3346 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0706 11:03:06.137796    3346 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0706 11:03:06.138546    3346 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0706 11:03:06.138607    3346 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0706 11:03:06.226872    3346 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0706 11:03:06.230044    3346 out.go:204]   - Generating certificates and keys ...
	I0706 11:03:06.230078    3346 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0706 11:03:06.230110    3346 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0706 11:03:06.299455    3346 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0706 11:03:06.544742    3346 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0706 11:03:06.720764    3346 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0706 11:03:06.947261    3346 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0706 11:03:07.017937    3346 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0706 11:03:07.018013    3346 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-946000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0706 11:03:07.112958    3346 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0706 11:03:07.113046    3346 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-946000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0706 11:03:07.428484    3346 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0706 11:03:07.498457    3346 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0706 11:03:07.638268    3346 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0706 11:03:07.638299    3346 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0706 11:03:07.684193    3346 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0706 11:03:07.746079    3346 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0706 11:03:07.808700    3346 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0706 11:03:08.028667    3346 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0706 11:03:08.028840    3346 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0706 11:03:08.032182    3346 out.go:204]   - Booting up control plane ...
	I0706 11:03:08.032241    3346 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0706 11:03:08.033077    3346 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0706 11:03:08.033460    3346 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0706 11:03:08.033795    3346 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0706 11:03:08.035132    3346 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0706 11:03:19.036587    3346 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.001303 seconds
	I0706 11:03:19.036663    3346 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0706 11:03:19.042409    3346 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0706 11:03:19.566649    3346 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0706 11:03:19.566845    3346 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-946000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0706 11:03:20.071302    3346 kubeadm.go:322] [bootstrap-token] Using token: ucvrgg.77z9xq9klxy4n76w
	I0706 11:03:20.074581    3346 out.go:204]   - Configuring RBAC rules ...
	I0706 11:03:20.074673    3346 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0706 11:03:20.074750    3346 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0706 11:03:20.078593    3346 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0706 11:03:20.079655    3346 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0706 11:03:20.080883    3346 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0706 11:03:20.081792    3346 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0706 11:03:20.085225    3346 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0706 11:03:20.269753    3346 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0706 11:03:20.484741    3346 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0706 11:03:20.485343    3346 kubeadm.go:322] 
	I0706 11:03:20.485390    3346 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0706 11:03:20.485394    3346 kubeadm.go:322] 
	I0706 11:03:20.485456    3346 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0706 11:03:20.485462    3346 kubeadm.go:322] 
	I0706 11:03:20.485489    3346 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0706 11:03:20.485535    3346 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0706 11:03:20.485582    3346 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0706 11:03:20.485587    3346 kubeadm.go:322] 
	I0706 11:03:20.485629    3346 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0706 11:03:20.485711    3346 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0706 11:03:20.485785    3346 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0706 11:03:20.485814    3346 kubeadm.go:322] 
	I0706 11:03:20.485873    3346 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0706 11:03:20.485947    3346 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0706 11:03:20.485954    3346 kubeadm.go:322] 
	I0706 11:03:20.486025    3346 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ucvrgg.77z9xq9klxy4n76w \
	I0706 11:03:20.486147    3346 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:54887cb817b031a56c6be5acb24737812f5477ec9674aeae1af9b05ae3868136 \
	I0706 11:03:20.486164    3346 kubeadm.go:322]     --control-plane 
	I0706 11:03:20.486167    3346 kubeadm.go:322] 
	I0706 11:03:20.486233    3346 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0706 11:03:20.486238    3346 kubeadm.go:322] 
	I0706 11:03:20.486304    3346 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ucvrgg.77z9xq9klxy4n76w \
	I0706 11:03:20.486376    3346 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:54887cb817b031a56c6be5acb24737812f5477ec9674aeae1af9b05ae3868136 
	I0706 11:03:20.486559    3346 kubeadm.go:322] W0706 18:03:06.344963    1415 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0706 11:03:20.486703    3346 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0706 11:03:20.486795    3346 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
	I0706 11:03:20.486886    3346 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0706 11:03:20.486978    3346 kubeadm.go:322] W0706 18:03:08.385245    1415 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0706 11:03:20.487081    3346 kubeadm.go:322] W0706 18:03:08.385688    1415 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0706 11:03:20.487093    3346 cni.go:84] Creating CNI manager for ""
	I0706 11:03:20.487103    3346 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0706 11:03:20.487115    3346 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0706 11:03:20.487191    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=b6e1a3abc91e215b081da44b95c5d4a34c954e9b minikube.k8s.io/name=ingress-addon-legacy-946000 minikube.k8s.io/updated_at=2023_07_06T11_03_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:20.487195    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:20.491320    3346 ops.go:34] apiserver oom_adj: -16
	I0706 11:03:20.566635    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:21.103481    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:21.603490    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:22.103435    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:22.603469    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:23.103358    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:23.603356    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:24.103513    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:24.603477    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:25.103412    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:25.603345    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:26.103096    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:26.603542    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:27.103415    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:27.603375    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:28.103376    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:28.603361    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:29.103115    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:29.603369    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:30.103391    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:30.602298    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:31.103326    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:31.603381    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:32.103398    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:32.603378    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:33.103369    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:33.603260    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:34.103365    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:34.603301    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:35.103140    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:35.603393    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:36.103053    3346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 11:03:36.139900    3346 kubeadm.go:1081] duration metric: took 15.652827875s to wait for elevateKubeSystemPrivileges.
	I0706 11:03:36.139914    3346 kubeadm.go:406] StartCluster complete in 30.1878625s
	I0706 11:03:36.139923    3346 settings.go:142] acquiring lock: {Name:mk352fa14b583fbace5fdd55e6f9ba4f39f48007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:03:36.140003    3346 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:03:36.140424    3346 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/kubeconfig: {Name:mk34623cbdb1646c9229359a97354a4ad80828c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:03:36.140646    3346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0706 11:03:36.140692    3346 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0706 11:03:36.140728    3346 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-946000"
	I0706 11:03:36.140736    3346 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-946000"
	I0706 11:03:36.140749    3346 config.go:182] Loaded profile config "ingress-addon-legacy-946000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0706 11:03:36.140758    3346 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-946000"
	I0706 11:03:36.140799    3346 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-946000"
	I0706 11:03:36.140762    3346 host.go:66] Checking if "ingress-addon-legacy-946000" exists ...
	I0706 11:03:36.140883    3346 kapi.go:59] client config for ingress-addon-legacy-946000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.key", CAFile:"/Users/jenkins/minikube-integration/15452-1247/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102355d90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 11:03:36.141308    3346 cert_rotation.go:137] Starting client certificate rotation controller
	I0706 11:03:36.141769    3346 kapi.go:59] client config for ingress-addon-legacy-946000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.key", CAFile:"/Users/jenkins/minikube-integration/15452-1247/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102355d90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 11:03:36.144910    3346 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0706 11:03:36.147041    3346 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-946000"
	I0706 11:03:36.148967    3346 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0706 11:03:36.148973    3346 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0706 11:03:36.148979    3346 host.go:66] Checking if "ingress-addon-legacy-946000" exists ...
	I0706 11:03:36.148982    3346 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/id_rsa Username:docker}
	I0706 11:03:36.149698    3346 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0706 11:03:36.149703    3346 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0706 11:03:36.149707    3346 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/ingress-addon-legacy-946000/id_rsa Username:docker}
	I0706 11:03:36.186505    3346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0706 11:03:36.193521    3346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0706 11:03:36.207207    3346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0706 11:03:36.459625    3346 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0706 11:03:36.495389    3346 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0706 11:03:36.503271    3346 addons.go:499] enable addons completed in 362.579541ms: enabled=[storage-provisioner default-storageclass]
	I0706 11:03:36.651596    3346 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-946000" context rescaled to 1 replicas
	I0706 11:03:36.651617    3346 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:03:36.655777    3346 out.go:177] * Verifying Kubernetes components...
	I0706 11:03:36.659744    3346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 11:03:36.686498    3346 kapi.go:59] client config for ingress-addon-legacy-946000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.key", CAFile:"/Users/jenkins/minikube-integration/15452-1247/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102355d90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 11:03:36.686643    3346 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-946000" to be "Ready" ...
	I0706 11:03:36.688557    3346 node_ready.go:49] node "ingress-addon-legacy-946000" has status "Ready":"True"
	I0706 11:03:36.688563    3346 node_ready.go:38] duration metric: took 1.909208ms waiting for node "ingress-addon-legacy-946000" to be "Ready" ...
	I0706 11:03:36.688567    3346 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 11:03:36.692008    3346 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-b9kgw" in "kube-system" namespace to be "Ready" ...
	I0706 11:03:38.699583    3346 pod_ready.go:92] pod "coredns-66bff467f8-b9kgw" in "kube-system" namespace has status "Ready":"True"
	I0706 11:03:38.699592    3346 pod_ready.go:81] duration metric: took 2.007581083s waiting for pod "coredns-66bff467f8-b9kgw" in "kube-system" namespace to be "Ready" ...
	I0706 11:03:38.699596    3346 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-946000" in "kube-system" namespace to be "Ready" ...
	I0706 11:03:38.701464    3346 pod_ready.go:92] pod "etcd-ingress-addon-legacy-946000" in "kube-system" namespace has status "Ready":"True"
	I0706 11:03:38.701468    3346 pod_ready.go:81] duration metric: took 1.869167ms waiting for pod "etcd-ingress-addon-legacy-946000" in "kube-system" namespace to be "Ready" ...
	I0706 11:03:38.701471    3346 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-946000" in "kube-system" namespace to be "Ready" ...
	I0706 11:03:38.703489    3346 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-946000" in "kube-system" namespace has status "Ready":"True"
	I0706 11:03:38.703494    3346 pod_ready.go:81] duration metric: took 2.012292ms waiting for pod "kube-apiserver-ingress-addon-legacy-946000" in "kube-system" namespace to be "Ready" ...
	I0706 11:03:38.703498    3346 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-946000" in "kube-system" namespace to be "Ready" ...
	I0706 11:03:38.705570    3346 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-946000" in "kube-system" namespace has status "Ready":"True"
	I0706 11:03:38.705575    3346 pod_ready.go:81] duration metric: took 2.074958ms waiting for pod "kube-controller-manager-ingress-addon-legacy-946000" in "kube-system" namespace to be "Ready" ...
	I0706 11:03:38.705579    3346 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tprr2" in "kube-system" namespace to be "Ready" ...
	I0706 11:03:38.707467    3346 pod_ready.go:92] pod "kube-proxy-tprr2" in "kube-system" namespace has status "Ready":"True"
	I0706 11:03:38.707472    3346 pod_ready.go:81] duration metric: took 1.890792ms waiting for pod "kube-proxy-tprr2" in "kube-system" namespace to be "Ready" ...
	I0706 11:03:38.707479    3346 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-946000" in "kube-system" namespace to be "Ready" ...
	I0706 11:03:38.898619    3346 request.go:628] Waited for 191.081833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-946000
	I0706 11:03:39.096834    3346 request.go:628] Waited for 196.106125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-946000
	I0706 11:03:39.098954    3346 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-946000" in "kube-system" namespace has status "Ready":"True"
	I0706 11:03:39.098963    3346 pod_ready.go:81] duration metric: took 391.476291ms waiting for pod "kube-scheduler-ingress-addon-legacy-946000" in "kube-system" namespace to be "Ready" ...
	I0706 11:03:39.098970    3346 pod_ready.go:38] duration metric: took 2.410405291s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 11:03:39.098988    3346 api_server.go:52] waiting for apiserver process to appear ...
	I0706 11:03:39.099076    3346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 11:03:39.106392    3346 api_server.go:72] duration metric: took 2.454768375s to wait for apiserver process to appear ...
	I0706 11:03:39.106406    3346 api_server.go:88] waiting for apiserver healthz status ...
	I0706 11:03:39.106415    3346 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0706 11:03:39.111654    3346 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0706 11:03:39.112215    3346 api_server.go:141] control plane version: v1.18.20
	I0706 11:03:39.112225    3346 api_server.go:131] duration metric: took 5.814959ms to wait for apiserver health ...
	I0706 11:03:39.112229    3346 system_pods.go:43] waiting for kube-system pods to appear ...
	I0706 11:03:39.298643    3346 request.go:628] Waited for 186.346916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0706 11:03:39.310927    3346 system_pods.go:59] 7 kube-system pods found
	I0706 11:03:39.310971    3346 system_pods.go:61] "coredns-66bff467f8-b9kgw" [957289d6-31a8-41f6-ae2c-2fc3b30c75eb] Running
	I0706 11:03:39.310980    3346 system_pods.go:61] "etcd-ingress-addon-legacy-946000" [ed90bf40-93ee-4499-84e6-1911077b1726] Running
	I0706 11:03:39.310988    3346 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-946000" [a4bcd51a-f0e5-4bb0-bc6c-e09f83f4d5ca] Running
	I0706 11:03:39.310997    3346 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-946000" [f527fba1-9594-4c04-adff-3f8d6d21f49a] Running
	I0706 11:03:39.311003    3346 system_pods.go:61] "kube-proxy-tprr2" [d1e1fb99-355d-481c-8fe1-72f7c35cc3d4] Running
	I0706 11:03:39.311011    3346 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-946000" [3fe7f49d-33a5-4545-bcc9-b6fe9d7968f5] Running
	I0706 11:03:39.311019    3346 system_pods.go:61] "storage-provisioner" [0d9567ab-7135-4b1d-9e44-b3e044f96d73] Running
	I0706 11:03:39.311031    3346 system_pods.go:74] duration metric: took 198.792042ms to wait for pod list to return data ...
	I0706 11:03:39.311043    3346 default_sa.go:34] waiting for default service account to be created ...
	I0706 11:03:39.498636    3346 request.go:628] Waited for 187.481458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0706 11:03:39.504364    3346 default_sa.go:45] found service account: "default"
	I0706 11:03:39.504398    3346 default_sa.go:55] duration metric: took 193.344584ms for default service account to be created ...
	I0706 11:03:39.504420    3346 system_pods.go:116] waiting for k8s-apps to be running ...
	I0706 11:03:39.698608    3346 request.go:628] Waited for 194.098792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0706 11:03:39.718911    3346 system_pods.go:86] 7 kube-system pods found
	I0706 11:03:39.718951    3346 system_pods.go:89] "coredns-66bff467f8-b9kgw" [957289d6-31a8-41f6-ae2c-2fc3b30c75eb] Running
	I0706 11:03:39.718965    3346 system_pods.go:89] "etcd-ingress-addon-legacy-946000" [ed90bf40-93ee-4499-84e6-1911077b1726] Running
	I0706 11:03:39.718974    3346 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-946000" [a4bcd51a-f0e5-4bb0-bc6c-e09f83f4d5ca] Running
	I0706 11:03:39.718983    3346 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-946000" [f527fba1-9594-4c04-adff-3f8d6d21f49a] Running
	I0706 11:03:39.718991    3346 system_pods.go:89] "kube-proxy-tprr2" [d1e1fb99-355d-481c-8fe1-72f7c35cc3d4] Running
	I0706 11:03:39.719001    3346 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-946000" [3fe7f49d-33a5-4545-bcc9-b6fe9d7968f5] Running
	I0706 11:03:39.719011    3346 system_pods.go:89] "storage-provisioner" [0d9567ab-7135-4b1d-9e44-b3e044f96d73] Running
	I0706 11:03:39.719024    3346 system_pods.go:126] duration metric: took 214.59125ms to wait for k8s-apps to be running ...
	I0706 11:03:39.719036    3346 system_svc.go:44] waiting for kubelet service to be running ....
	I0706 11:03:39.719156    3346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 11:03:39.735350    3346 system_svc.go:56] duration metric: took 16.304458ms WaitForService to wait for kubelet.
	I0706 11:03:39.735376    3346 kubeadm.go:581] duration metric: took 3.08375275s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0706 11:03:39.735400    3346 node_conditions.go:102] verifying NodePressure condition ...
	I0706 11:03:39.898664    3346 request.go:628] Waited for 163.16225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0706 11:03:39.908321    3346 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0706 11:03:39.908389    3346 node_conditions.go:123] node cpu capacity is 2
	I0706 11:03:39.908422    3346 node_conditions.go:105] duration metric: took 173.013375ms to run NodePressure ...
	I0706 11:03:39.908448    3346 start.go:228] waiting for startup goroutines ...
	I0706 11:03:39.908465    3346 start.go:233] waiting for cluster config update ...
	I0706 11:03:39.908494    3346 start.go:242] writing updated cluster config ...
	I0706 11:03:39.909766    3346 ssh_runner.go:195] Run: rm -f paused
	I0706 11:03:39.973441    3346 start.go:642] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0706 11:03:39.976764    3346 out.go:177] 
	W0706 11:03:39.980759    3346 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0706 11:03:39.984626    3346 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0706 11:03:39.992590    3346 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-946000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-07-06 18:02:54 UTC, ends at Thu 2023-07-06 18:04:44 UTC. --
	Jul 06 18:04:17 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:17.581914632Z" level=info msg="shim disconnected" id=4e7f2f918fae6ac4b9c7d5e1b34ce2b0e8f660d988103879e889f59419587d2d namespace=moby
	Jul 06 18:04:17 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:17.581988802Z" level=warning msg="cleaning up after shim disconnected" id=4e7f2f918fae6ac4b9c7d5e1b34ce2b0e8f660d988103879e889f59419587d2d namespace=moby
	Jul 06 18:04:17 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:17.581993760Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 06 18:04:31 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:31.074064836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 18:04:31 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:31.074116462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:04:31 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:31.074132129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 18:04:31 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:31.074144296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 18:04:31 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:31.092070888Z" level=info msg="shim disconnected" id=f7abdf39d4774dd283db585ff0d076797e7e0b0b8fb579447a29075d26e42191 namespace=moby
	Jul 06 18:04:31 ingress-addon-legacy-946000 dockerd[1070]: time="2023-07-06T18:04:31.092161265Z" level=info msg="ignoring event" container=f7abdf39d4774dd283db585ff0d076797e7e0b0b8fb579447a29075d26e42191 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 06 18:04:31 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:31.092334853Z" level=warning msg="cleaning up after shim disconnected" id=f7abdf39d4774dd283db585ff0d076797e7e0b0b8fb579447a29075d26e42191 namespace=moby
	Jul 06 18:04:31 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:31.092360729Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 06 18:04:31 ingress-addon-legacy-946000 dockerd[1070]: time="2023-07-06T18:04:31.118560535Z" level=info msg="ignoring event" container=3bd56345bc3ae9a5b6a9e6e0f4bfbf2c13e42ff1c01d2badcc27b4fd8e91f993 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 06 18:04:31 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:31.118649538Z" level=info msg="shim disconnected" id=3bd56345bc3ae9a5b6a9e6e0f4bfbf2c13e42ff1c01d2badcc27b4fd8e91f993 namespace=moby
	Jul 06 18:04:31 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:31.118677455Z" level=warning msg="cleaning up after shim disconnected" id=3bd56345bc3ae9a5b6a9e6e0f4bfbf2c13e42ff1c01d2badcc27b4fd8e91f993 namespace=moby
	Jul 06 18:04:31 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:31.118681872Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 06 18:04:39 ingress-addon-legacy-946000 dockerd[1070]: time="2023-07-06T18:04:39.498516591Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=0d7758e5f05562405a01b6a6412f5d99997e6a0e89e8eb0885f38a4e55d51657
	Jul 06 18:04:39 ingress-addon-legacy-946000 dockerd[1070]: time="2023-07-06T18:04:39.508134622Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=0d7758e5f05562405a01b6a6412f5d99997e6a0e89e8eb0885f38a4e55d51657
	Jul 06 18:04:39 ingress-addon-legacy-946000 dockerd[1070]: time="2023-07-06T18:04:39.610172929Z" level=info msg="ignoring event" container=0d7758e5f05562405a01b6a6412f5d99997e6a0e89e8eb0885f38a4e55d51657 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 06 18:04:39 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:39.610511436Z" level=info msg="shim disconnected" id=0d7758e5f05562405a01b6a6412f5d99997e6a0e89e8eb0885f38a4e55d51657 namespace=moby
	Jul 06 18:04:39 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:39.611158616Z" level=warning msg="cleaning up after shim disconnected" id=0d7758e5f05562405a01b6a6412f5d99997e6a0e89e8eb0885f38a4e55d51657 namespace=moby
	Jul 06 18:04:39 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:39.611169658Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 06 18:04:39 ingress-addon-legacy-946000 dockerd[1070]: time="2023-07-06T18:04:39.650278629Z" level=info msg="ignoring event" container=a91c409e289e1034847e84b488b7abc853efea3379fbeafd217679e94df8d6f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 06 18:04:39 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:39.650606886Z" level=info msg="shim disconnected" id=a91c409e289e1034847e84b488b7abc853efea3379fbeafd217679e94df8d6f5 namespace=moby
	Jul 06 18:04:39 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:39.650661554Z" level=warning msg="cleaning up after shim disconnected" id=a91c409e289e1034847e84b488b7abc853efea3379fbeafd217679e94df8d6f5 namespace=moby
	Jul 06 18:04:39 ingress-addon-legacy-946000 dockerd[1077]: time="2023-07-06T18:04:39.650667137Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	3bd56345bc3ae       13753a81eccfd                                                                                                      13 seconds ago       Exited              hello-world-app           2                   c374adf2e2ed3
	75abb4862885e       nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                                      36 seconds ago       Running             nginx                     0                   ae817f419e806
	0d7758e5f0556       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   50 seconds ago       Exited              controller                0                   a91c409e289e1
	70fb2637fc36a       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   560f1cfbc58ff
	40fc4ce62dd92       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   74cb0293eb136
	3079e7a09c70f       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   44ab796457d88
	079dba66c8143       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   6494073d676fb
	6ea0ed74beff8       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   4f1aeaceb3035
	60440a7da69a9       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   ccd0d37550765
	a89337ee1fe0a       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   c1c8368f62ef6
	f62af0393295b       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   625cd3461c34e
	19f4ada310d81       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   4b1aabca6f4a3
	
	* 
	* ==> coredns [079dba66c814] <==
	* [INFO] 172.17.0.1:56588 - 1007 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000028668s
	[INFO] 172.17.0.1:56588 - 55420 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026793s
	[INFO] 172.17.0.1:56588 - 61948 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026501s
	[INFO] 172.17.0.1:56588 - 42961 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000037418s
	[INFO] 172.17.0.1:63036 - 4551 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000050294s
	[INFO] 172.17.0.1:63036 - 51802 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000016376s
	[INFO] 172.17.0.1:63036 - 62647 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000019043s
	[INFO] 172.17.0.1:63036 - 13126 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010375s
	[INFO] 172.17.0.1:63036 - 51107 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026085s
	[INFO] 172.17.0.1:63036 - 19618 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009917s
	[INFO] 172.17.0.1:63036 - 45871 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000013251s
	[INFO] 172.17.0.1:59571 - 49999 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000018126s
	[INFO] 172.17.0.1:59571 - 55640 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000014751s
	[INFO] 172.17.0.1:59571 - 31607 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000016335s
	[INFO] 172.17.0.1:59571 - 63309 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012334s
	[INFO] 172.17.0.1:59571 - 60510 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009042s
	[INFO] 172.17.0.1:59571 - 3013 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000023667s
	[INFO] 172.17.0.1:59571 - 15814 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000012876s
	[INFO] 172.17.0.1:53089 - 63353 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000021043s
	[INFO] 172.17.0.1:53089 - 46569 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00003371s
	[INFO] 172.17.0.1:53089 - 24204 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012959s
	[INFO] 172.17.0.1:53089 - 13974 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000110047s
	[INFO] 172.17.0.1:53089 - 38388 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000017293s
	[INFO] 172.17.0.1:53089 - 63582 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013667s
	[INFO] 172.17.0.1:53089 - 51906 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000013584s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-946000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-946000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b6e1a3abc91e215b081da44b95c5d4a34c954e9b
	                    minikube.k8s.io/name=ingress-addon-legacy-946000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_06T11_03_20_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 06 Jul 2023 18:03:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-946000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 06 Jul 2023 18:04:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 06 Jul 2023 18:04:27 +0000   Thu, 06 Jul 2023 18:03:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 06 Jul 2023 18:04:27 +0000   Thu, 06 Jul 2023 18:03:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 06 Jul 2023 18:04:27 +0000   Thu, 06 Jul 2023 18:03:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 06 Jul 2023 18:04:27 +0000   Thu, 06 Jul 2023 18:03:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-946000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac6ff17620304f67a23f630da3518231
	  System UUID:                ac6ff17620304f67a23f630da3518231
	  Boot ID:                    e036e417-f042-4ef9-8e3e-df49e4fae2db
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-6hbw6                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 coredns-66bff467f8-b9kgw                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     68s
	  kube-system                 etcd-ingress-addon-legacy-946000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-apiserver-ingress-addon-legacy-946000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-946000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-tprr2                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-ingress-addon-legacy-946000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 78s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s   kubelet     Node ingress-addon-legacy-946000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s   kubelet     Node ingress-addon-legacy-946000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s   kubelet     Node ingress-addon-legacy-946000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                77s   kubelet     Node ingress-addon-legacy-946000 status is now: NodeReady
	  Normal  Starting                 67s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Jul 6 18:02] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.662572] EINJ: EINJ table not found.
	[  +0.525682] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044878] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000804] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.156576] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +0.079687] systemd-fstab-generator[493]: Ignoring "noauto" for root device
	[  +0.469906] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[  +0.173735] systemd-fstab-generator[747]: Ignoring "noauto" for root device
	[  +0.079427] systemd-fstab-generator[758]: Ignoring "noauto" for root device
	[  +0.092222] systemd-fstab-generator[771]: Ignoring "noauto" for root device
	[  +1.147585] kauditd_printk_skb: 17 callbacks suppressed
	[Jul 6 18:03] systemd-fstab-generator[1063]: Ignoring "noauto" for root device
	[  +4.129893] systemd-fstab-generator[1530]: Ignoring "noauto" for root device
	[  +8.308970] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.086288] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.555244] systemd-fstab-generator[2605]: Ignoring "noauto" for root device
	[ +16.694704] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.341158] kauditd_printk_skb: 15 callbacks suppressed
	[  +1.562048] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Jul 6 18:04] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [a89337ee1fe0] <==
	* raft2023/07/06 18:03:16 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/07/06 18:03:16 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/07/06 18:03:16 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/07/06 18:03:16 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-07-06 18:03:16.190484 W | auth: simple token is not cryptographically signed
	2023-07-06 18:03:16.191206 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-07-06 18:03:16.192953 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-06 18:03:16.193030 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-06 18:03:16.193112 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-07-06 18:03:16.193168 I | embed: listening for peers on 192.168.105.6:2380
	raft2023/07/06 18:03:16 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-07-06 18:03:16.193336 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/07/06 18:03:16 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/07/06 18:03:16 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/07/06 18:03:16 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/07/06 18:03:16 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/07/06 18:03:16 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-07-06 18:03:16.692615 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-06 18:03:16.694408 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-06 18:03:16.694515 I | etcdserver/api: enabled capabilities for version 3.4
	2023-07-06 18:03:16.694732 I | etcdserver: published {Name:ingress-addon-legacy-946000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-07-06 18:03:16.695070 I | embed: ready to serve client requests
	2023-07-06 18:03:16.698461 I | embed: serving client requests on 192.168.105.6:2379
	2023-07-06 18:03:16.698614 I | embed: ready to serve client requests
	2023-07-06 18:03:16.701151 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  18:04:44 up 1 min,  0 users,  load average: 0.60, 0.23, 0.09
	Linux ingress-addon-legacy-946000 5.10.57 #1 SMP PREEMPT Fri Jun 30 18:49:58 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [60440a7da69a] <==
	* I0706 18:03:18.153598       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	E0706 18:03:18.169810       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0706 18:03:18.250303       1 cache.go:39] Caches are synced for autoregister controller
	I0706 18:03:18.251166       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0706 18:03:18.251204       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0706 18:03:18.251630       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0706 18:03:18.254102       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0706 18:03:19.153384       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0706 18:03:19.153860       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0706 18:03:19.172913       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0706 18:03:19.181552       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0706 18:03:19.181686       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0706 18:03:19.313313       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0706 18:03:19.323330       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0706 18:03:19.416596       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0706 18:03:19.416901       1 controller.go:609] quota admission added evaluator for: endpoints
	I0706 18:03:19.418241       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0706 18:03:20.449382       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0706 18:03:20.616968       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0706 18:03:20.829838       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0706 18:03:26.973148       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0706 18:03:36.456990       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0706 18:03:36.615617       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0706 18:03:40.365810       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0706 18:04:04.718575       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [19f4ada310d8] <==
	* I0706 18:03:36.503539       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-946000", UID:"be536869-c028-4ca6-8327-4b9041bd9b19", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-946000 event: Registered Node ingress-addon-legacy-946000 in Controller
	I0706 18:03:36.528820       1 shared_informer.go:230] Caches are synced for disruption 
	I0706 18:03:36.528830       1 disruption.go:339] Sending events to api server.
	I0706 18:03:36.599775       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0706 18:03:36.614469       1 shared_informer.go:230] Caches are synced for deployment 
	I0706 18:03:36.616855       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"9661037a-0eff-4cab-afe6-ed9a7820dd9f", APIVersion:"apps/v1", ResourceVersion:"308", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0706 18:03:36.619598       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"eb80c15e-e3fc-49ee-88cf-c1531e3b2898", APIVersion:"apps/v1", ResourceVersion:"332", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-b9kgw
	I0706 18:03:36.646806       1 shared_informer.go:230] Caches are synced for attach detach 
	I0706 18:03:36.651243       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0706 18:03:36.756130       1 shared_informer.go:230] Caches are synced for resource quota 
	I0706 18:03:36.757983       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0706 18:03:36.757997       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0706 18:03:36.772951       1 shared_informer.go:230] Caches are synced for resource quota 
	I0706 18:03:37.104163       1 request.go:621] Throttling request took 1.043947882s, request: GET:https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	I0706 18:03:37.555783       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0706 18:03:37.555847       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0706 18:03:40.361436       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"ace48bd9-69fe-413f-92de-12f5a9b10a93", APIVersion:"apps/v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0706 18:03:40.373693       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"0938f944-0937-4149-872c-c9fe7cda44f9", APIVersion:"batch/v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-fv74h
	I0706 18:03:40.373764       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"ee3c0f0d-c024-47b0-bf69-d058879cfb84", APIVersion:"apps/v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-j6wxq
	I0706 18:03:40.409815       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"25c1b4c2-98fd-4460-a792-1e1aa584ba2a", APIVersion:"batch/v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-d24fk
	I0706 18:03:43.155982       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"25c1b4c2-98fd-4460-a792-1e1aa584ba2a", APIVersion:"batch/v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0706 18:03:43.162392       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"0938f944-0937-4149-872c-c9fe7cda44f9", APIVersion:"batch/v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0706 18:04:14.988194       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"65bf939b-4891-4c7b-925e-75cfaaee1273", APIVersion:"apps/v1", ResourceVersion:"544", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0706 18:04:14.996117       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"7eb7219e-8f46-4429-974c-b682b68ddfd3", APIVersion:"apps/v1", ResourceVersion:"545", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-6hbw6
	E0706 18:04:42.240922       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-6ckn4" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [6ea0ed74beff] <==
	* W0706 18:03:37.003180       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0706 18:03:37.014179       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0706 18:03:37.014194       1 server_others.go:186] Using iptables Proxier.
	I0706 18:03:37.014311       1 server.go:583] Version: v1.18.20
	I0706 18:03:37.016043       1 config.go:133] Starting endpoints config controller
	I0706 18:03:37.016056       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0706 18:03:37.016071       1 config.go:315] Starting service config controller
	I0706 18:03:37.016073       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0706 18:03:37.117878       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0706 18:03:37.117877       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [f62af0393295] <==
	* W0706 18:03:18.175413       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0706 18:03:18.175427       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0706 18:03:18.191244       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0706 18:03:18.191319       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0706 18:03:18.192248       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0706 18:03:18.192354       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0706 18:03:18.192385       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0706 18:03:18.192437       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0706 18:03:18.198162       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0706 18:03:18.198213       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0706 18:03:18.198277       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0706 18:03:18.198361       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0706 18:03:18.198394       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0706 18:03:18.198421       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0706 18:03:18.199983       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0706 18:03:18.200005       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0706 18:03:18.200027       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0706 18:03:18.200045       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0706 18:03:18.200063       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0706 18:03:18.200152       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0706 18:03:19.099300       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0706 18:03:19.199677       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0706 18:03:19.202799       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0706 18:03:19.281344       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0706 18:03:21.392605       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-07-06 18:02:54 UTC, ends at Thu 2023-07-06 18:04:44 UTC. --
	Jul 06 18:04:26 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:26.007064    2611 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ef6a0717403194ce4b24b9b393ce2cab0c9ca3f79c2743c2f212b6040e1f9641
	Jul 06 18:04:26 ingress-addon-legacy-946000 kubelet[2611]: E0706 18:04:26.009591    2611 pod_workers.go:191] Error syncing pod 222dec71-9d55-48b5-8e16-caaabe3a5d6d ("kube-ingress-dns-minikube_kube-system(222dec71-9d55-48b5-8e16-caaabe3a5d6d)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(222dec71-9d55-48b5-8e16-caaabe3a5d6d)"
	Jul 06 18:04:30 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:30.333510    2611 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-8whnr" (UniqueName: "kubernetes.io/secret/222dec71-9d55-48b5-8e16-caaabe3a5d6d-minikube-ingress-dns-token-8whnr") pod "222dec71-9d55-48b5-8e16-caaabe3a5d6d" (UID: "222dec71-9d55-48b5-8e16-caaabe3a5d6d")
	Jul 06 18:04:30 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:30.337148    2611 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222dec71-9d55-48b5-8e16-caaabe3a5d6d-minikube-ingress-dns-token-8whnr" (OuterVolumeSpecName: "minikube-ingress-dns-token-8whnr") pod "222dec71-9d55-48b5-8e16-caaabe3a5d6d" (UID: "222dec71-9d55-48b5-8e16-caaabe3a5d6d"). InnerVolumeSpecName "minikube-ingress-dns-token-8whnr". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 06 18:04:30 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:30.433726    2611 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-8whnr" (UniqueName: "kubernetes.io/secret/222dec71-9d55-48b5-8e16-caaabe3a5d6d-minikube-ingress-dns-token-8whnr") on node "ingress-addon-legacy-946000" DevicePath ""
	Jul 06 18:04:31 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:31.007470    2611 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4e7f2f918fae6ac4b9c7d5e1b34ce2b0e8f660d988103879e889f59419587d2d
	Jul 06 18:04:31 ingress-addon-legacy-946000 kubelet[2611]: W0706 18:04:31.131351    2611 container.go:412] Failed to create summary reader for "/kubepods/besteffort/podbbf3374c-dbe2-40eb-8ac7-3cadc9e5346f/3bd56345bc3ae9a5b6a9e6e0f4bfbf2c13e42ff1c01d2badcc27b4fd8e91f993": none of the resources are being tracked.
	Jul 06 18:04:31 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:31.735223    2611 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ef6a0717403194ce4b24b9b393ce2cab0c9ca3f79c2743c2f212b6040e1f9641
	Jul 06 18:04:31 ingress-addon-legacy-946000 kubelet[2611]: W0706 18:04:31.741832    2611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-6hbw6 through plugin: invalid network status for
	Jul 06 18:04:31 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:31.749641    2611 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3bd56345bc3ae9a5b6a9e6e0f4bfbf2c13e42ff1c01d2badcc27b4fd8e91f993
	Jul 06 18:04:31 ingress-addon-legacy-946000 kubelet[2611]: E0706 18:04:31.750028    2611 pod_workers.go:191] Error syncing pod bbf3374c-dbe2-40eb-8ac7-3cadc9e5346f ("hello-world-app-5f5d8b66bb-6hbw6_default(bbf3374c-dbe2-40eb-8ac7-3cadc9e5346f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-6hbw6_default(bbf3374c-dbe2-40eb-8ac7-3cadc9e5346f)"
	Jul 06 18:04:31 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:31.754407    2611 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4e7f2f918fae6ac4b9c7d5e1b34ce2b0e8f660d988103879e889f59419587d2d
	Jul 06 18:04:32 ingress-addon-legacy-946000 kubelet[2611]: W0706 18:04:32.766985    2611 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-6hbw6 through plugin: invalid network status for
	Jul 06 18:04:37 ingress-addon-legacy-946000 kubelet[2611]: E0706 18:04:37.479169    2611 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-j6wxq.176f59656f1b4381", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-j6wxq", UID:"d9210e10-eafc-48af-b03d-b0ab7a7ebf62", APIVersion:"v1", ResourceVersion:"423", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-946000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc121de6d5c7b7181, ext:76888246647, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc121de6d5c7b7181, ext:76888246647, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-j6wxq.176f59656f1b4381" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 06 18:04:37 ingress-addon-legacy-946000 kubelet[2611]: E0706 18:04:37.492251    2611 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-j6wxq.176f59656f1b4381", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-j6wxq", UID:"d9210e10-eafc-48af-b03d-b0ab7a7ebf62", APIVersion:"v1", ResourceVersion:"423", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-946000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc121de6d5c7b7181, ext:76888246647, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc121de6d5cf8f4e6, ext:76896472284, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-j6wxq.176f59656f1b4381" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 06 18:04:39 ingress-addon-legacy-946000 kubelet[2611]: W0706 18:04:39.892642    2611 pod_container_deletor.go:77] Container "a91c409e289e1034847e84b488b7abc853efea3379fbeafd217679e94df8d6f5" not found in pod's containers
	Jul 06 18:04:41 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:41.661278    2611 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d9210e10-eafc-48af-b03d-b0ab7a7ebf62-webhook-cert") pod "d9210e10-eafc-48af-b03d-b0ab7a7ebf62" (UID: "d9210e10-eafc-48af-b03d-b0ab7a7ebf62")
	Jul 06 18:04:41 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:41.661401    2611 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-pz6t7" (UniqueName: "kubernetes.io/secret/d9210e10-eafc-48af-b03d-b0ab7a7ebf62-ingress-nginx-token-pz6t7") pod "d9210e10-eafc-48af-b03d-b0ab7a7ebf62" (UID: "d9210e10-eafc-48af-b03d-b0ab7a7ebf62")
	Jul 06 18:04:41 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:41.669566    2611 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9210e10-eafc-48af-b03d-b0ab7a7ebf62-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d9210e10-eafc-48af-b03d-b0ab7a7ebf62" (UID: "d9210e10-eafc-48af-b03d-b0ab7a7ebf62"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 06 18:04:41 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:41.675120    2611 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9210e10-eafc-48af-b03d-b0ab7a7ebf62-ingress-nginx-token-pz6t7" (OuterVolumeSpecName: "ingress-nginx-token-pz6t7") pod "d9210e10-eafc-48af-b03d-b0ab7a7ebf62" (UID: "d9210e10-eafc-48af-b03d-b0ab7a7ebf62"). InnerVolumeSpecName "ingress-nginx-token-pz6t7". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 06 18:04:41 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:41.762975    2611 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d9210e10-eafc-48af-b03d-b0ab7a7ebf62-webhook-cert") on node "ingress-addon-legacy-946000" DevicePath ""
	Jul 06 18:04:41 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:41.763072    2611 reconciler.go:319] Volume detached for volume "ingress-nginx-token-pz6t7" (UniqueName: "kubernetes.io/secret/d9210e10-eafc-48af-b03d-b0ab7a7ebf62-ingress-nginx-token-pz6t7") on node "ingress-addon-legacy-946000" DevicePath ""
	Jul 06 18:04:43 ingress-addon-legacy-946000 kubelet[2611]: W0706 18:04:43.031074    2611 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/d9210e10-eafc-48af-b03d-b0ab7a7ebf62/volumes" does not exist
	Jul 06 18:04:44 ingress-addon-legacy-946000 kubelet[2611]: I0706 18:04:44.005861    2611 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3bd56345bc3ae9a5b6a9e6e0f4bfbf2c13e42ff1c01d2badcc27b4fd8e91f993
	Jul 06 18:04:44 ingress-addon-legacy-946000 kubelet[2611]: E0706 18:04:44.009298    2611 pod_workers.go:191] Error syncing pod bbf3374c-dbe2-40eb-8ac7-3cadc9e5346f ("hello-world-app-5f5d8b66bb-6hbw6_default(bbf3374c-dbe2-40eb-8ac7-3cadc9e5346f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-6hbw6_default(bbf3374c-dbe2-40eb-8ac7-3cadc9e5346f)"
	
	* 
	* ==> storage-provisioner [3079e7a09c70] <==
	* I0706 18:03:38.905337       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0706 18:03:38.909347       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0706 18:03:38.909371       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0706 18:03:38.912091       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0706 18:03:38.912198       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-946000_0cd81918-4a90-407d-94d2-d776e5c6cba3!
	I0706 18:03:38.912112       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9b229e57-96ca-4216-ac78-37a8b8ffa37d", APIVersion:"v1", ResourceVersion:"375", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-946000_0cd81918-4a90-407d-94d2-d776e5c6cba3 became leader
	I0706 18:03:39.013209       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-946000_0cd81918-4a90-407d-94d2-d776e5c6cba3!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-946000 -n ingress-addon-legacy-946000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-946000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (49.28s)

TestJSONOutput/start/Command (17.63s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-883000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-883000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 90 (17.624786833s)
-- stdout --
	{"specversion":"1.0","id":"4fba92b8-821f-4a62-a142-52c3713348d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-883000] minikube v1.30.1 on Darwin 13.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"158715cd-dbbe-4480-b08d-fe6bcfb24435","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15452"}}
	{"specversion":"1.0","id":"b5fdd68b-6d49-45c7-9aa4-6e60a2aafaa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig"}}
	{"specversion":"1.0","id":"9ffdd02e-6d5b-4baa-8204-eb21999d9604","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"8ebc7d5c-c82c-4372-b9af-99bbd1b22d31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"74a8dfe1-b01c-4c01-8776-8e3799bb1eaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube"}}
	{"specversion":"1.0","id":"d0c254ed-6fac-4807-a0b6-4ba3a6c42d4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d14cdb9c-aa13-45b1-94d4-a5d253993efc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"06692858-89a4-436e-9982-c9f1335180dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"54ca7249-d0ea-434f-be52-6d77bf4325cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node json-output-883000 in cluster json-output-883000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2db61949-7bf0-44fa-b917-70b6d89ddb4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"bafee61e-8b34-469e-9b3a-3595fe6b19b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"90","issues":"","message":"Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1\nstdout:\n\nstderr:\nJob failed. See \"journalctl -xe\" for details.","name":"RUNTIME_ENABLE","url":""}}
	{"specversion":"1.0","id":"458a3167-568b-47a2-b696-b5d573bc3c3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-883000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 90
--- FAIL: TestJSONOutput/start/Command (17.63s)
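The `-- stdout --` block above is a stream of CloudEvents emitted by minikube's `--output=json` mode, one JSON object per line, with the failure carried in an event of type `io.k8s.sigs.minikube.error` (here `RUNTIME_ENABLE`, exit code 90). When triaging a batch of these failures it can help to filter the error events out of the stream. A minimal sketch — the two sample lines below are abbreviated from the log above, and only the fields shown there are assumed:

```python
import json

# Two sample CloudEvents lines, abbreviated from the test output above.
stream = '''
{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"0","message":"[json-output-883000] minikube v1.30.1 on Darwin 13.4.1 (arm64)"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"90","name":"RUNTIME_ENABLE","message":"Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1"}}
'''

def minikube_errors(lines):
    """Yield the data payload of each minikube error event in a JSON stream."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            yield event["data"]

for err in minikube_errors(stream.splitlines()):
    print(err["name"], err["exitcode"], err["message"].splitlines()[0])
```

The same filter applied to the pause/unpause output below would surface the `GUEST_PAUSE` and `GUEST_UNPAUSE` events, which show the cascading effect of the failed start (`kubelet.service` was never installed).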
TestJSONOutput/pause/Command (1.81s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-883000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-883000 --output=json --user=testUser: exit status 80 (1.805738208s)
-- stdout --
	{"specversion":"1.0","id":"8ddcbe4b-797a-47f7-a16c-abd4db79ce33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-883000 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"297ac514-2ea1-4d68-9e4d-a2eea0e524b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1\nstdout:\n\nstderr:\nFailed to disable unit: Unit file kubelet.service does not exist.","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"7d6604ef-3675-4883-9ca8-b9b9aab87024","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                        │\n│    If the above advice does not help, please let us know:                                                              │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                            │\n│
│\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │\n│    Please also attach the following file to the GitHub issue:                                                          │\n│    - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │\n│                                                                                                                        │\n╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-883000 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.81s)

TestJSONOutput/unpause/Command (1.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-883000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-883000 --output=json --user=testUser: exit status 80 (1.46321325s)

-- stdout --
	{"specversion":"1.0","id":"b92b8bee-cdbb-4e4c-bd69-ae9f13852ded","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-883000 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"46517570-92aa-4673-b393-5269e2019666","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: kubelet start: sudo systemctl start kubelet: Process exited with status 5\nstdout:\n\nstderr:\nFailed to start kubelet.service: Unit kubelet.service not found.","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"20fb4a5f-ad63-4f17-96ed-202f139d4f64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                          │\n│    If the above advice does not help, please let us know:                                                                │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                              │\n│
│\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │\n│    Please also attach the following file to the GitHub issue:                                                            │\n│    - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log    │\n│                                                                                                                          │\n╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-883000 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.46s)
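Both JSONOutput failures above emit CloudEvents-style JSON lines on stdout (`--output=json`). A minimal sketch of pulling the error name, exit code, and message out of such a line; the `extract_error` helper is my own, and the sample event is abridged from the unpause log above:

```python
import json

# Abridged io.k8s.sigs.minikube.error event copied from the unpause failure.
EVENT = '{"specversion":"1.0","id":"46517570-92aa-4673-b393-5269e2019666","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: kubelet start: Failed to start kubelet.service: Unit kubelet.service not found.","name":"GUEST_UNPAUSE","url":""}}'

def extract_error(line):
    """Return (name, exitcode, message) for an error event, or None for
    step/other event types. Hypothetical helper, not part of the test suite."""
    event = json.loads(line)
    if event.get("type") != "io.k8s.sigs.minikube.error":
        return None
    data = event.get("data", {})
    return data.get("name"), data.get("exitcode"), data.get("message")

name, code, msg = extract_error(EVENT)
```

Parsing the events this way (rather than grepping the rendered box-drawing text) is what `json_output_test.go` itself relies on, which is why a malformed or error-typed event fails the test.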

TestMountStart/serial/StartWithMountFirst (10.48s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-412000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-412000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.40947225s)

-- stdout --
	* [mount-start-1-412000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-412000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-412000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-412000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-412000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-412000 -n mount-start-1-412000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-412000 -n mount-start-1-412000: exit status 7 (69.755666ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-412000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.48s)
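The common root cause in this run is `Failed to connect to "/var/run/socket_vmnet": Connection refused` during VM creation. As a diagnostic sketch only (the `probe_unix_socket` helper is my own, not part of minikube), one can distinguish a missing socket file from a socket file nothing is listening on, which is the state this error indicates:

```python
import socket
from pathlib import Path

def probe_unix_socket(path):
    """Classify a unix-domain socket path: 'missing' if the file is absent,
    'refused' if the file exists but no daemon is accepting (the state behind
    the errors above), 'ok' if a connection succeeds."""
    if not Path(path).exists():
        return "missing"
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "ok"
    except ConnectionRefusedError:
        return "refused"
    finally:
        s.close()

# On the CI host one would probe the path from the log, e.g.:
# probe_unix_socket("/var/run/socket_vmnet")
```

A "refused" result suggests the socket_vmnet daemon on the macOS agent is down or crashed, which would explain every qemu2 `GUEST_PROVISION` failure in this report at once.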

TestMultiNode/serial/FreshStart2Nodes (9.84s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-312000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-312000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.773306083s)

-- stdout --
	* [multinode-312000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-312000 in cluster multinode-312000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-312000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:08:16.923201    3700 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:08:16.923352    3700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:08:16.923355    3700 out.go:309] Setting ErrFile to fd 2...
	I0706 11:08:16.923357    3700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:08:16.923428    3700 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:08:16.924451    3700 out.go:303] Setting JSON to false
	I0706 11:08:16.939937    3700 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2268,"bootTime":1688664628,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:08:16.940004    3700 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:08:16.944457    3700 out.go:177] * [multinode-312000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:08:16.952433    3700 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:08:16.956426    3700 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:08:16.952486    3700 notify.go:220] Checking for updates...
	I0706 11:08:16.959477    3700 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:08:16.962449    3700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:08:16.965429    3700 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:08:16.968425    3700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:08:16.971552    3700 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:08:16.975384    3700 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:08:16.982417    3700 start.go:297] selected driver: qemu2
	I0706 11:08:16.982425    3700 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:08:16.982431    3700 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:08:16.984405    3700 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:08:16.987414    3700 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:08:16.990443    3700 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:08:16.990461    3700 cni.go:84] Creating CNI manager for ""
	I0706 11:08:16.990465    3700 cni.go:137] 0 nodes found, recommending kindnet
	I0706 11:08:16.990471    3700 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0706 11:08:16.990477    3700 start_flags.go:319] config:
	{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-312000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0}
	I0706 11:08:16.994507    3700 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:08:16.997441    3700 out.go:177] * Starting control plane node multinode-312000 in cluster multinode-312000
	I0706 11:08:17.005454    3700 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:08:17.005480    3700 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:08:17.005499    3700 cache.go:57] Caching tarball of preloaded images
	I0706 11:08:17.005563    3700 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:08:17.005575    3700 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:08:17.005771    3700 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/multinode-312000/config.json ...
	I0706 11:08:17.005784    3700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/multinode-312000/config.json: {Name:mkddf9c03367441dbcc10f46a8932400874ae404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:08:17.005984    3700 start.go:365] acquiring machines lock for multinode-312000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:08:17.006011    3700 start.go:369] acquired machines lock for "multinode-312000" in 21.791µs
	I0706 11:08:17.006021    3700 start.go:93] Provisioning new machine with config: &{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-312000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:08:17.006058    3700 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:08:17.014451    3700 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:08:17.029574    3700 start.go:159] libmachine.API.Create for "multinode-312000" (driver="qemu2")
	I0706 11:08:17.029600    3700 client.go:168] LocalClient.Create starting
	I0706 11:08:17.029653    3700 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:08:17.029678    3700 main.go:141] libmachine: Decoding PEM data...
	I0706 11:08:17.029692    3700 main.go:141] libmachine: Parsing certificate...
	I0706 11:08:17.029741    3700 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:08:17.029755    3700 main.go:141] libmachine: Decoding PEM data...
	I0706 11:08:17.029769    3700 main.go:141] libmachine: Parsing certificate...
	I0706 11:08:17.030088    3700 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:08:17.162046    3700 main.go:141] libmachine: Creating SSH key...
	I0706 11:08:17.217973    3700 main.go:141] libmachine: Creating Disk image...
	I0706 11:08:17.217978    3700 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:08:17.218127    3700 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2
	I0706 11:08:17.226557    3700 main.go:141] libmachine: STDOUT: 
	I0706 11:08:17.226571    3700 main.go:141] libmachine: STDERR: 
	I0706 11:08:17.226626    3700 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2 +20000M
	I0706 11:08:17.233748    3700 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:08:17.233765    3700 main.go:141] libmachine: STDERR: 
	I0706 11:08:17.233780    3700 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2
	I0706 11:08:17.233784    3700 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:08:17.233817    3700 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:c3:2b:b1:c6:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2
	I0706 11:08:17.235360    3700 main.go:141] libmachine: STDOUT: 
	I0706 11:08:17.235374    3700 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:08:17.235393    3700 client.go:171] LocalClient.Create took 205.788042ms
	I0706 11:08:19.237570    3700 start.go:128] duration metric: createHost completed in 2.231502125s
	I0706 11:08:19.237639    3700 start.go:83] releasing machines lock for "multinode-312000", held for 2.231625833s
	W0706 11:08:19.237703    3700 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:08:19.246332    3700 out.go:177] * Deleting "multinode-312000" in qemu2 ...
	W0706 11:08:19.265684    3700 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:08:19.265724    3700 start.go:687] Will try again in 5 seconds ...
	I0706 11:08:24.267879    3700 start.go:365] acquiring machines lock for multinode-312000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:08:24.268314    3700 start.go:369] acquired machines lock for "multinode-312000" in 361.333µs
	I0706 11:08:24.268413    3700 start.go:93] Provisioning new machine with config: &{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-312000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:08:24.268738    3700 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:08:24.279655    3700 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:08:24.325755    3700 start.go:159] libmachine.API.Create for "multinode-312000" (driver="qemu2")
	I0706 11:08:24.325821    3700 client.go:168] LocalClient.Create starting
	I0706 11:08:24.325932    3700 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:08:24.325975    3700 main.go:141] libmachine: Decoding PEM data...
	I0706 11:08:24.325994    3700 main.go:141] libmachine: Parsing certificate...
	I0706 11:08:24.326076    3700 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:08:24.326103    3700 main.go:141] libmachine: Decoding PEM data...
	I0706 11:08:24.326120    3700 main.go:141] libmachine: Parsing certificate...
	I0706 11:08:24.326647    3700 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:08:24.512875    3700 main.go:141] libmachine: Creating SSH key...
	I0706 11:08:24.612379    3700 main.go:141] libmachine: Creating Disk image...
	I0706 11:08:24.612385    3700 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:08:24.612521    3700 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2
	I0706 11:08:24.620983    3700 main.go:141] libmachine: STDOUT: 
	I0706 11:08:24.620997    3700 main.go:141] libmachine: STDERR: 
	I0706 11:08:24.621056    3700 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2 +20000M
	I0706 11:08:24.628188    3700 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:08:24.628212    3700 main.go:141] libmachine: STDERR: 
	I0706 11:08:24.628225    3700 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2
	I0706 11:08:24.628230    3700 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:08:24.628283    3700 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:90:89:a7:37:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2
	I0706 11:08:24.629810    3700 main.go:141] libmachine: STDOUT: 
	I0706 11:08:24.629822    3700 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:08:24.629835    3700 client.go:171] LocalClient.Create took 304.00825ms
	I0706 11:08:26.632088    3700 start.go:128] duration metric: createHost completed in 2.36332275s
	I0706 11:08:26.632174    3700 start.go:83] releasing machines lock for "multinode-312000", held for 2.363841625s
	W0706 11:08:26.632575    3700 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-312000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-312000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:08:26.641354    3700 out.go:177] 
	W0706 11:08:26.646317    3700 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:08:26.646366    3700 out.go:239] * 
	* 
	W0706 11:08:26.649036    3700 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:08:26.656259    3700 out.go:177] 

** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-312000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (68.431041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.84s)
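The root failure above is `Failed to connect to "/var/run/socket_vmnet": Connection refused`: the qemu2 driver dials the socket_vmnet daemon's unix socket before it can provision a guest. A minimal standalone probe of that socket (a hypothetical diagnostic sketch, not part of the test harness; the path comes from the log) looks like:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeVMnetSocket attempts the same kind of unix-socket connection the
// qemu2 driver makes; a "connection refused" or "no such file" error here
// corresponds to the GUEST_PROVISION failure in the log above.
func probeVMnetSocket(path string) error {
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeVMnetSocket("/var/run/socket_vmnet"); err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	fmt.Println("socket_vmnet reachable")
}
```

On a CI host in this state the probe would fail, which is consistent with every subsequent TestMultiNode subtest finding the cluster stopped.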

TestMultiNode/serial/DeployApp2Nodes (119.49s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (116.750417ms)

** stderr ** 
	error: cluster "multinode-312000" does not exist

** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- rollout status deployment/busybox: exit status 1 (54.586584ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (54.109875ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.456709ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.801333ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.89375ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0706 11:08:35.685751    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.630167ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.638084ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.798875ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0706 11:08:55.568130    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0706 11:08:55.574336    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0706 11:08:55.586411    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0706 11:08:55.608497    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0706 11:08:55.650593    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0706 11:08:55.732664    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0706 11:08:55.894779    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0706 11:08:56.216896    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0706 11:08:56.859098    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.52775ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0706 11:08:58.141686    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0706 11:09:00.704085    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0706 11:09:05.826431    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.96125ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0706 11:09:16.068779    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
E0706 11:09:36.551234    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.883458ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0706 11:10:17.513622    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.884333ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.388459ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- exec  -- nslookup kubernetes.io: exit status 1 (53.067208ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- exec  -- nslookup kubernetes.default: exit status 1 (53.460791ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (53.561667ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (28.3475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (119.49s)

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.17275ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (28.557542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-312000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-312000 -v 3 --alsologtostderr: exit status 89 (39.978584ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-312000"

-- /stdout --
** stderr ** 
	I0706 11:10:26.336574    3795 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:10:26.336777    3795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:26.336779    3795 out.go:309] Setting ErrFile to fd 2...
	I0706 11:10:26.336782    3795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:26.336856    3795 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:10:26.337089    3795 mustload.go:65] Loading cluster: multinode-312000
	I0706 11:10:26.337280    3795 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:10:26.341658    3795 out.go:177] * The control plane node must be running for this command
	I0706 11:10:26.345699    3795 out.go:177]   To start a cluster, run: "minikube start -p multinode-312000"

** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-312000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (28.240834ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/ProfileList (0.17s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-312000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-312000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-312000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.27.3\",\"ClusterName\":\"multinode-312000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.27.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (33.120709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.17s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status --output json --alsologtostderr: exit status 7 (28.605042ms)

-- stdout --
	{"Name":"multinode-312000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0706 11:10:26.571706    3805 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:10:26.571870    3805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:26.571873    3805 out.go:309] Setting ErrFile to fd 2...
	I0706 11:10:26.571875    3805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:26.571957    3805 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:10:26.572091    3805 out.go:303] Setting JSON to true
	I0706 11:10:26.572100    3805 mustload.go:65] Loading cluster: multinode-312000
	I0706 11:10:26.572164    3805 notify.go:220] Checking for updates...
	I0706 11:10:26.572265    3805 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:10:26.572270    3805 status.go:255] checking status of multinode-312000 ...
	I0706 11:10:26.572457    3805 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0706 11:10:26.572461    3805 status.go:343] host is not running, skipping remaining checks
	I0706 11:10:26.572463    3805 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-312000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (28.500791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 node stop m03: exit status 85 (45.508083ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-312000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status: exit status 7 (28.385625ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr: exit status 7 (28.398084ms)

                                                
                                                
-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0706 11:10:26.703458    3813 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:10:26.703588    3813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:26.703591    3813 out.go:309] Setting ErrFile to fd 2...
	I0706 11:10:26.703593    3813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:26.703663    3813 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:10:26.703783    3813 out.go:303] Setting JSON to false
	I0706 11:10:26.703795    3813 mustload.go:65] Loading cluster: multinode-312000
	I0706 11:10:26.703871    3813 notify.go:220] Checking for updates...
	I0706 11:10:26.703968    3813 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:10:26.703973    3813 status.go:255] checking status of multinode-312000 ...
	I0706 11:10:26.704157    3813 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0706 11:10:26.704163    3813 status.go:343] host is not running, skipping remaining checks
	I0706 11:10:26.704165    3813 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr": multinode-312000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (28.190167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 node start m03 --alsologtostderr: exit status 85 (45.473834ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0706 11:10:26.760816    3817 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:10:26.761028    3817 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:26.761031    3817 out.go:309] Setting ErrFile to fd 2...
	I0706 11:10:26.761034    3817 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:26.761119    3817 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:10:26.761343    3817 mustload.go:65] Loading cluster: multinode-312000
	I0706 11:10:26.761506    3817 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:10:26.766125    3817 out.go:177] 
	W0706 11:10:26.769119    3817 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0706 11:10:26.769123    3817 out.go:239] * 
	* 
	W0706 11:10:26.770728    3817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:10:26.773980    3817 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I0706 11:10:26.760816    3817 out.go:296] Setting OutFile to fd 1 ...
I0706 11:10:26.761028    3817 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 11:10:26.761031    3817 out.go:309] Setting ErrFile to fd 2...
I0706 11:10:26.761034    3817 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 11:10:26.761119    3817 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
I0706 11:10:26.761343    3817 mustload.go:65] Loading cluster: multinode-312000
I0706 11:10:26.761506    3817 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 11:10:26.766125    3817 out.go:177] 
W0706 11:10:26.769119    3817 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0706 11:10:26.769123    3817 out.go:239] * 
* 
W0706 11:10:26.770728    3817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0706 11:10:26.773980    3817 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-312000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status: exit status 7 (28.337334ms)

                                                
                                                
-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-312000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (28.523625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (5.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-312000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-312000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-312000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-312000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.1775365s)

                                                
                                                
-- stdout --
	* [multinode-312000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-312000 in cluster multinode-312000
	* Restarting existing qemu2 VM for "multinode-312000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-312000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0706 11:10:26.951011    3827 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:10:26.951136    3827 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:26.951138    3827 out.go:309] Setting ErrFile to fd 2...
	I0706 11:10:26.951141    3827 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:26.951217    3827 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:10:26.952150    3827 out.go:303] Setting JSON to false
	I0706 11:10:26.967397    3827 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2398,"bootTime":1688664628,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:10:26.967462    3827 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:10:26.972156    3827 out.go:177] * [multinode-312000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:10:26.978072    3827 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:10:26.982101    3827 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:10:26.978137    3827 notify.go:220] Checking for updates...
	I0706 11:10:26.985077    3827 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:10:26.988062    3827 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:10:26.991081    3827 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:10:26.994042    3827 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:10:26.997281    3827 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:10:26.997344    3827 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:10:27.002091    3827 out.go:177] * Using the qemu2 driver based on existing profile
	I0706 11:10:27.009067    3827 start.go:297] selected driver: qemu2
	I0706 11:10:27.009075    3827 start.go:944] validating driver "qemu2" against &{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.3 ClusterName:multinode-312000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:10:27.009152    3827 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:10:27.011046    3827 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:10:27.011070    3827 cni.go:84] Creating CNI manager for ""
	I0706 11:10:27.011074    3827 cni.go:137] 1 nodes found, recommending kindnet
	I0706 11:10:27.011079    3827 start_flags.go:319] config:
	{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-312000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:10:27.014853    3827 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:10:27.022045    3827 out.go:177] * Starting control plane node multinode-312000 in cluster multinode-312000
	I0706 11:10:27.026060    3827 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:10:27.026083    3827 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:10:27.026097    3827 cache.go:57] Caching tarball of preloaded images
	I0706 11:10:27.026149    3827 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:10:27.026154    3827 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:10:27.026205    3827 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/multinode-312000/config.json ...
	I0706 11:10:27.026513    3827 start.go:365] acquiring machines lock for multinode-312000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:10:27.026540    3827 start.go:369] acquired machines lock for "multinode-312000" in 22.5µs
	I0706 11:10:27.026550    3827 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:10:27.026554    3827 fix.go:54] fixHost starting: 
	I0706 11:10:27.026669    3827 fix.go:102] recreateIfNeeded on multinode-312000: state=Stopped err=<nil>
	W0706 11:10:27.026676    3827 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:10:27.035037    3827 out.go:177] * Restarting existing qemu2 VM for "multinode-312000" ...
	I0706 11:10:27.039102    3827 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:90:89:a7:37:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2
	I0706 11:10:27.040909    3827 main.go:141] libmachine: STDOUT: 
	I0706 11:10:27.040925    3827 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:10:27.040951    3827 fix.go:56] fixHost completed within 14.397416ms
	I0706 11:10:27.040956    3827 start.go:83] releasing machines lock for "multinode-312000", held for 14.412208ms
	W0706 11:10:27.040963    3827 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:10:27.040995    3827 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:10:27.041000    3827 start.go:687] Will try again in 5 seconds ...
	I0706 11:10:32.043057    3827 start.go:365] acquiring machines lock for multinode-312000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:10:32.043398    3827 start.go:369] acquired machines lock for "multinode-312000" in 277.875µs
	I0706 11:10:32.043516    3827 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:10:32.043534    3827 fix.go:54] fixHost starting: 
	I0706 11:10:32.044250    3827 fix.go:102] recreateIfNeeded on multinode-312000: state=Stopped err=<nil>
	W0706 11:10:32.044275    3827 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:10:32.051664    3827 out.go:177] * Restarting existing qemu2 VM for "multinode-312000" ...
	I0706 11:10:32.055695    3827 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:90:89:a7:37:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2
	I0706 11:10:32.063791    3827 main.go:141] libmachine: STDOUT: 
	I0706 11:10:32.063860    3827 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:10:32.063926    3827 fix.go:56] fixHost completed within 20.388583ms
	I0706 11:10:32.063949    3827 start.go:83] releasing machines lock for "multinode-312000", held for 20.527666ms
	W0706 11:10:32.064146    3827 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-312000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-312000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:10:32.072534    3827 out.go:177] 
	W0706 11:10:32.076626    3827 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:10:32.076661    3827 out.go:239] * 
	* 
	W0706 11:10:32.078715    3827 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:10:32.088579    3827 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-312000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-312000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (32.311708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.36s)

                                                
                                    
TestMultiNode/serial/DeleteNode (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 node delete m03: exit status 89 (37.616333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-312000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-312000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr: exit status 7 (28.388292ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0706 11:10:32.265119    3841 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:10:32.265253    3841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:32.265255    3841 out.go:309] Setting ErrFile to fd 2...
	I0706 11:10:32.265258    3841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:32.265325    3841 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:10:32.265444    3841 out.go:303] Setting JSON to false
	I0706 11:10:32.265453    3841 mustload.go:65] Loading cluster: multinode-312000
	I0706 11:10:32.265505    3841 notify.go:220] Checking for updates...
	I0706 11:10:32.265618    3841 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:10:32.265623    3841 status.go:255] checking status of multinode-312000 ...
	I0706 11:10:32.265806    3841 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0706 11:10:32.265809    3841 status.go:343] host is not running, skipping remaining checks
	I0706 11:10:32.265811    3841 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (28.291667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.09s)

TestMultiNode/serial/StopMultiNode (0.14s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status: exit status 7 (28.969583ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr: exit status 7 (28.340667ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0706 11:10:32.409503    3849 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:10:32.409661    3849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:32.409664    3849 out.go:309] Setting ErrFile to fd 2...
	I0706 11:10:32.409666    3849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:32.409736    3849 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:10:32.409847    3849 out.go:303] Setting JSON to false
	I0706 11:10:32.409856    3849 mustload.go:65] Loading cluster: multinode-312000
	I0706 11:10:32.409920    3849 notify.go:220] Checking for updates...
	I0706 11:10:32.410029    3849 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:10:32.410035    3849 status.go:255] checking status of multinode-312000 ...
	I0706 11:10:32.410224    3849 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0706 11:10:32.410229    3849 status.go:343] host is not running, skipping remaining checks
	I0706 11:10:32.410231    3849 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr": multinode-312000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr": multinode-312000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (28.1005ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.14s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-312000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-312000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.176744291s)

-- stdout --
	* [multinode-312000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-312000 in cluster multinode-312000
	* Restarting existing qemu2 VM for "multinode-312000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-312000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:10:32.465901    3853 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:10:32.466011    3853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:32.466014    3853 out.go:309] Setting ErrFile to fd 2...
	I0706 11:10:32.466016    3853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:32.466081    3853 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:10:32.467026    3853 out.go:303] Setting JSON to false
	I0706 11:10:32.482208    3853 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2404,"bootTime":1688664628,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:10:32.482275    3853 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:10:32.486622    3853 out.go:177] * [multinode-312000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:10:32.493773    3853 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:10:32.493794    3853 notify.go:220] Checking for updates...
	I0706 11:10:32.499716    3853 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:10:32.502791    3853 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:10:32.504194    3853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:10:32.507721    3853 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:10:32.510734    3853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:10:32.514037    3853 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:10:32.514308    3853 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:10:32.518699    3853 out.go:177] * Using the qemu2 driver based on existing profile
	I0706 11:10:32.525738    3853 start.go:297] selected driver: qemu2
	I0706 11:10:32.525744    3853 start.go:944] validating driver "qemu2" against &{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.3 ClusterName:multinode-312000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:10:32.525799    3853 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:10:32.527703    3853 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:10:32.527729    3853 cni.go:84] Creating CNI manager for ""
	I0706 11:10:32.527733    3853 cni.go:137] 1 nodes found, recommending kindnet
	I0706 11:10:32.527738    3853 start_flags.go:319] config:
	{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-312000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:10:32.531630    3853 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:10:32.538734    3853 out.go:177] * Starting control plane node multinode-312000 in cluster multinode-312000
	I0706 11:10:32.542723    3853 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:10:32.542740    3853 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:10:32.542753    3853 cache.go:57] Caching tarball of preloaded images
	I0706 11:10:32.542807    3853 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:10:32.542812    3853 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:10:32.542878    3853 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/multinode-312000/config.json ...
	I0706 11:10:32.543269    3853 start.go:365] acquiring machines lock for multinode-312000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:10:32.543295    3853 start.go:369] acquired machines lock for "multinode-312000" in 20.333µs
	I0706 11:10:32.543305    3853 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:10:32.543310    3853 fix.go:54] fixHost starting: 
	I0706 11:10:32.543437    3853 fix.go:102] recreateIfNeeded on multinode-312000: state=Stopped err=<nil>
	W0706 11:10:32.543447    3853 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:10:32.550715    3853 out.go:177] * Restarting existing qemu2 VM for "multinode-312000" ...
	I0706 11:10:32.554777    3853 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:90:89:a7:37:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2
	I0706 11:10:32.556710    3853 main.go:141] libmachine: STDOUT: 
	I0706 11:10:32.556730    3853 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:10:32.556760    3853 fix.go:56] fixHost completed within 13.450667ms
	I0706 11:10:32.556766    3853 start.go:83] releasing machines lock for "multinode-312000", held for 13.466833ms
	W0706 11:10:32.556774    3853 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:10:32.556818    3853 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:10:32.556823    3853 start.go:687] Will try again in 5 seconds ...
	I0706 11:10:37.558955    3853 start.go:365] acquiring machines lock for multinode-312000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:10:37.559470    3853 start.go:369] acquired machines lock for "multinode-312000" in 439.916µs
	I0706 11:10:37.559674    3853 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:10:37.559692    3853 fix.go:54] fixHost starting: 
	I0706 11:10:37.560504    3853 fix.go:102] recreateIfNeeded on multinode-312000: state=Stopped err=<nil>
	W0706 11:10:37.560532    3853 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:10:37.567869    3853 out.go:177] * Restarting existing qemu2 VM for "multinode-312000" ...
	I0706 11:10:37.572051    3853 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:90:89:a7:37:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/multinode-312000/disk.qcow2
	I0706 11:10:37.581440    3853 main.go:141] libmachine: STDOUT: 
	I0706 11:10:37.581503    3853 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:10:37.581604    3853 fix.go:56] fixHost completed within 21.909959ms
	I0706 11:10:37.581625    3853 start.go:83] releasing machines lock for "multinode-312000", held for 22.1355ms
	W0706 11:10:37.581933    3853 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-312000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-312000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:10:37.589891    3853 out.go:177] 
	W0706 11:10:37.593900    3853 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:10:37.593929    3853 out.go:239] * 
	* 
	W0706 11:10:37.596594    3853 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:10:37.604841    3853 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-312000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (69.816ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (20.01s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-312000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-312000-m01 --driver=qemu2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-312000-m01 --driver=qemu2 : exit status 80 (9.873328084s)

-- stdout --
	* [multinode-312000-m01] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-312000-m01 in cluster multinode-312000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-312000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-312000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-312000-m02 --driver=qemu2 
E0706 11:10:51.818914    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-312000-m02 --driver=qemu2 : exit status 80 (9.852269708s)

-- stdout --
	* [multinode-312000-m02] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-312000-m02 in cluster multinode-312000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-312000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-312000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-312000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-312000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-312000: exit status 89 (79.37325ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-312000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-312000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (30.7155ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.01s)
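Note that this failure, like the others in this run, traces to the same root cause: every qemu2 VM creation fails with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon's control socket is absent on the CI host. A minimal sketch for confirming the socket exists before re-running the suite (the path is the default `SocketVMnetPath` seen in these logs; the function name is ours, not minikube's):

```shell
# check_vmnet_socket: report whether the socket_vmnet control socket exists.
# /var/run/socket_vmnet is the default SocketVMnetPath used in this run.
check_vmnet_socket() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    echo "socket present: $sock"
  else
    echo "socket missing: $sock"
  fi
}

check_vmnet_socket
```

If the socket is missing, starting the socket_vmnet daemon (e.g. via its launchd service) before invoking `out/minikube-darwin-arm64 start` should clear this entire class of `GUEST_PROVISION` failures.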

TestPreload (10.19s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-343000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-343000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.017000875s)

-- stdout --
	* [test-preload-343000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-343000 in cluster test-preload-343000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-343000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:10:57.843798    3912 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:10:57.843932    3912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:57.843934    3912 out.go:309] Setting ErrFile to fd 2...
	I0706 11:10:57.843936    3912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:10:57.844005    3912 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:10:57.845023    3912 out.go:303] Setting JSON to false
	I0706 11:10:57.860559    3912 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2429,"bootTime":1688664628,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:10:57.860633    3912 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:10:57.865647    3912 out.go:177] * [test-preload-343000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:10:57.873527    3912 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:10:57.876493    3912 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:10:57.873615    3912 notify.go:220] Checking for updates...
	I0706 11:10:57.882504    3912 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:10:57.883917    3912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:10:57.886488    3912 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:10:57.889514    3912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:10:57.892815    3912 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:10:57.892857    3912 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:10:57.897468    3912 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:10:57.904477    3912 start.go:297] selected driver: qemu2
	I0706 11:10:57.904482    3912 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:10:57.904486    3912 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:10:57.906448    3912 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:10:57.909525    3912 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:10:57.912633    3912 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:10:57.912660    3912 cni.go:84] Creating CNI manager for ""
	I0706 11:10:57.912667    3912 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:10:57.912671    3912 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:10:57.912678    3912 start_flags.go:319] config:
	{Name:test-preload-343000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-343000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock:
SSHAgentPID:0}
	I0706 11:10:57.916940    3912 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:10:57.924470    3912 out.go:177] * Starting control plane node test-preload-343000 in cluster test-preload-343000
	I0706 11:10:57.928507    3912 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0706 11:10:57.928614    3912 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/test-preload-343000/config.json ...
	I0706 11:10:57.928618    3912 cache.go:107] acquiring lock: {Name:mk48c0f65fbbf68799957091e8e1b480d9c76ae0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:10:57.928617    3912 cache.go:107] acquiring lock: {Name:mk9d364a518b83d94ecaecccd1d583b9d0070980 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:10:57.928615    3912 cache.go:107] acquiring lock: {Name:mke32f2365c6a76b179b139bffb8dbe1b535eb28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:10:57.928633    3912 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/test-preload-343000/config.json: {Name:mke1fd3e90becf2d9f40550d2bb8a158a4c8d639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:10:57.928641    3912 cache.go:107] acquiring lock: {Name:mkc71a0a8d3c705f78c3eb45c80b5fce8240584a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:10:57.928645    3912 cache.go:107] acquiring lock: {Name:mk9873a4e3ac395e9114399af75bdbf4ab6ab529 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:10:57.928661    3912 cache.go:107] acquiring lock: {Name:mkcb1f9ade27691fd4f880593cc313d8643ff5d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:10:57.928651    3912 cache.go:107] acquiring lock: {Name:mkb210d8030f141ae11e57acbd4ab15480d1f9af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:10:57.928824    3912 cache.go:107] acquiring lock: {Name:mk13ea5a42c7a1de8228a79f28dd4d66cb7fe62a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:10:57.928835    3912 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0706 11:10:57.928839    3912 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0706 11:10:57.928888    3912 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0706 11:10:57.928897    3912 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0706 11:10:57.929014    3912 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0706 11:10:57.929026    3912 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0706 11:10:57.929070    3912 start.go:365] acquiring machines lock for test-preload-343000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:10:57.929082    3912 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0706 11:10:57.929103    3912 start.go:369] acquired machines lock for "test-preload-343000" in 27µs
	I0706 11:10:57.929115    3912 start.go:93] Provisioning new machine with config: &{Name:test-preload-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-343000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:10:57.929172    3912 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:10:57.936337    3912 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:10:57.929266    3912 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0706 11:10:57.939368    3912 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0706 11:10:57.939942    3912 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0706 11:10:57.940101    3912 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0706 11:10:57.940125    3912 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0706 11:10:57.940160    3912 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0706 11:10:57.940203    3912 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0706 11:10:57.943535    3912 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0706 11:10:57.943707    3912 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0706 11:10:57.952566    3912 start.go:159] libmachine.API.Create for "test-preload-343000" (driver="qemu2")
	I0706 11:10:57.952596    3912 client.go:168] LocalClient.Create starting
	I0706 11:10:57.952659    3912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:10:57.952679    3912 main.go:141] libmachine: Decoding PEM data...
	I0706 11:10:57.952690    3912 main.go:141] libmachine: Parsing certificate...
	I0706 11:10:57.952732    3912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:10:57.952747    3912 main.go:141] libmachine: Decoding PEM data...
	I0706 11:10:57.952754    3912 main.go:141] libmachine: Parsing certificate...
	I0706 11:10:57.953051    3912 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:10:58.112195    3912 main.go:141] libmachine: Creating SSH key...
	I0706 11:10:58.260577    3912 main.go:141] libmachine: Creating Disk image...
	I0706 11:10:58.260588    3912 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:10:58.260772    3912 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/disk.qcow2
	I0706 11:10:58.269812    3912 main.go:141] libmachine: STDOUT: 
	I0706 11:10:58.269834    3912 main.go:141] libmachine: STDERR: 
	I0706 11:10:58.269898    3912 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/disk.qcow2 +20000M
	I0706 11:10:58.277865    3912 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:10:58.277882    3912 main.go:141] libmachine: STDERR: 
	I0706 11:10:58.277894    3912 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/disk.qcow2
	I0706 11:10:58.277899    3912 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:10:58.277947    3912 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:83:b2:9e:46:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/disk.qcow2
	I0706 11:10:58.279571    3912 main.go:141] libmachine: STDOUT: 
	I0706 11:10:58.279589    3912 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:10:58.279612    3912 client.go:171] LocalClient.Create took 327.013834ms
	I0706 11:10:59.313867    3912 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0706 11:10:59.437699    3912 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0706 11:10:59.447354    3912 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0706 11:10:59.549511    3912 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0706 11:10:59.549528    3912 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.620871792s
	I0706 11:10:59.549548    3912 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0706 11:10:59.600594    3912 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0706 11:10:59.651622    3912 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0706 11:10:59.651659    3912 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0706 11:10:59.790196    3912 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0706 11:10:59.886659    3912 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0706 11:10:59.886671    3912 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.958065209s
	I0706 11:10:59.886678    3912 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0706 11:11:00.010370    3912 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0706 11:11:00.190053    3912 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0706 11:11:00.190138    3912 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0706 11:11:00.279757    3912 start.go:128] duration metric: createHost completed in 2.350576541s
	I0706 11:11:00.279798    3912 start.go:83] releasing machines lock for "test-preload-343000", held for 2.3506945s
	W0706 11:11:00.279876    3912 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:00.290838    3912 out.go:177] * Deleting "test-preload-343000" in qemu2 ...
	W0706 11:11:00.310221    3912 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:00.310253    3912 start.go:687] Will try again in 5 seconds ...
	I0706 11:11:01.296499    3912 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0706 11:11:01.296551    3912 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.367948041s
	I0706 11:11:01.296599    3912 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0706 11:11:01.756605    3912 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0706 11:11:01.756651    3912 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.82785225s
	I0706 11:11:01.756678    3912 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0706 11:11:02.300339    3912 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0706 11:11:02.300405    3912 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.371765667s
	I0706 11:11:02.300441    3912 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0706 11:11:03.765355    3912 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0706 11:11:03.765406    3912 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.836812083s
	I0706 11:11:03.765439    3912 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0706 11:11:04.937790    3912 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0706 11:11:04.937853    3912 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.009230666s
	I0706 11:11:04.937900    3912 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0706 11:11:05.310618    3912 start.go:365] acquiring machines lock for test-preload-343000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:11:05.311032    3912 start.go:369] acquired machines lock for "test-preload-343000" in 334.25µs
	I0706 11:11:05.311134    3912 start.go:93] Provisioning new machine with config: &{Name:test-preload-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-343000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:11:05.311421    3912 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:11:05.320053    3912 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:11:05.368101    3912 start.go:159] libmachine.API.Create for "test-preload-343000" (driver="qemu2")
	I0706 11:11:05.368147    3912 client.go:168] LocalClient.Create starting
	I0706 11:11:05.368273    3912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:11:05.368313    3912 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:05.368341    3912 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:05.368440    3912 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:11:05.368480    3912 main.go:141] libmachine: Decoding PEM data...
	I0706 11:11:05.368497    3912 main.go:141] libmachine: Parsing certificate...
	I0706 11:11:05.369052    3912 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:11:05.651442    3912 main.go:141] libmachine: Creating SSH key...
	I0706 11:11:05.775570    3912 main.go:141] libmachine: Creating Disk image...
	I0706 11:11:05.775577    3912 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:11:05.775731    3912 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/disk.qcow2
	I0706 11:11:05.784257    3912 main.go:141] libmachine: STDOUT: 
	I0706 11:11:05.784271    3912 main.go:141] libmachine: STDERR: 
	I0706 11:11:05.784330    3912 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/disk.qcow2 +20000M
	I0706 11:11:05.791737    3912 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:11:05.791786    3912 main.go:141] libmachine: STDERR: 
	I0706 11:11:05.791802    3912 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/disk.qcow2
	I0706 11:11:05.791810    3912 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:11:05.791866    3912 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:9e:7b:9c:22:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/test-preload-343000/disk.qcow2
	I0706 11:11:05.793445    3912 main.go:141] libmachine: STDOUT: 
	I0706 11:11:05.793460    3912 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:11:05.793474    3912 client.go:171] LocalClient.Create took 425.321459ms
	I0706 11:11:07.569471    3912 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0706 11:11:07.569548    3912 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.640928834s
	I0706 11:11:07.569601    3912 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0706 11:11:07.569671    3912 cache.go:87] Successfully saved all images to host disk.
	I0706 11:11:07.795721    3912 start.go:128] duration metric: createHost completed in 2.484256792s
	I0706 11:11:07.795790    3912 start.go:83] releasing machines lock for "test-preload-343000", held for 2.484740209s
	W0706 11:11:07.796044    3912 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-343000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-343000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:11:07.804288    3912 out.go:177] 
	W0706 11:11:07.808670    3912 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:11:07.808702    3912 out.go:239] * 
	* 
	W0706 11:11:07.811381    3912 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:11:07.820573    3912 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-343000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-07-06 11:11:07.836012 -0700 PDT m=+895.986651626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-343000 -n test-preload-343000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-343000 -n test-preload-343000: exit status 7 (67.88675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-343000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-343000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-343000
--- FAIL: TestPreload (10.19s)
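Editor's note: nearly every failure in this report shows the same symptom: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon is not listening on the runner. As a minimal sketch (not minikube source; the socket path is taken from the logs above), the condition libmachine hits can be reproduced by dialing the unix socket directly:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// checkSocket dials a unix-domain socket and reports whether anything is
// listening on it. "connection refused" / "no such file" here mirrors the
// error libmachine logs when it spawns socket_vmnet_client.
func checkSocket(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := checkSocket("/var/run/socket_vmnet"); err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
	} else {
		fmt.Println("socket_vmnet reachable")
	}
}
```

If the dial fails on the CI host, restarting the socket_vmnet daemon (or recreating a stale socket file) would be the first thing to check before rerunning the suite.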

TestScheduledStopUnix (10.15s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-790000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-790000 --memory=2048 --driver=qemu2 : exit status 80 (9.987185875s)

-- stdout --
	* [scheduled-stop-790000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-790000 in cluster scheduled-stop-790000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-790000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-790000 in cluster scheduled-stop-790000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-07-06 11:11:17.995583 -0700 PDT m=+906.146255084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-790000 -n scheduled-stop-790000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-790000 -n scheduled-stop-790000: exit status 7 (69.114ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-790000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-790000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-790000
--- FAIL: TestScheduledStopUnix (10.15s)

TestSkaffold (11.75s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3230512614 version
E0706 11:11:19.527663    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-540000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-540000 --memory=2600 --driver=qemu2 : exit status 80 (9.902190791s)

-- stdout --
	* [skaffold-540000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-540000 in cluster skaffold-540000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-540000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-540000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-540000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-540000 in cluster skaffold-540000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-540000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-540000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-07-06 11:11:29.744833 -0700 PDT m=+917.895543168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-540000 -n skaffold-540000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-540000 -n skaffold-540000: exit status 7 (63.552583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-540000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-540000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-540000
--- FAIL: TestSkaffold (11.75s)

TestRunningBinaryUpgrade (167.44s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-06 11:14:58.847155 -0700 PDT m=+1126.998543334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-244000 -n running-upgrade-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-244000 -n running-upgrade-244000: exit status 85 (85.912917ms)

-- stdout --
	* Profile "running-upgrade-244000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-244000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-244000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-244000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-244000\"")
helpers_test.go:175: Cleaning up "running-upgrade-244000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-244000
--- FAIL: TestRunningBinaryUpgrade (167.44s)
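Editor's note: unlike the socket_vmnet failures, this test fails before any VM is created: downloading the v1.6.2 minikube release returns HTTP 404. A plausible explanation (an inference from the error, not stated in the log) is that v1.6.2 predates darwin/arm64 release binaries, so no asset exists for this runner's platform. The hypothetical helper below illustrates the platform-specific URL shape involved; the URL pattern is an assumption based on minikube's public release bucket, and the real test uses its own download logic:

```go
package main

import "fmt"

// releaseURL builds a release-binary URL for a given minikube version and
// platform (hypothetical helper for illustration only).
func releaseURL(version, osName, arch string) string {
	return fmt.Sprintf(
		"https://storage.googleapis.com/minikube/releases/%s/minikube-%s-%s",
		version, osName, arch)
}

func main() {
	// The asset this darwin/arm64 runner would request; a GET for it 404s.
	fmt.Println(releaseURL("v1.6.2", "darwin", "arm64"))
}
```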

TestKubernetesUpgrade (15.22s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-201000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-201000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.766195833s)

-- stdout --
	* [kubernetes-upgrade-201000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-201000 in cluster kubernetes-upgrade-201000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-201000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:14:59.253515    4415 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:14:59.253636    4415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:14:59.253639    4415 out.go:309] Setting ErrFile to fd 2...
	I0706 11:14:59.253642    4415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:14:59.253720    4415 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:14:59.254717    4415 out.go:303] Setting JSON to false
	I0706 11:14:59.269945    4415 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2671,"bootTime":1688664628,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:14:59.270027    4415 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:14:59.274503    4415 out.go:177] * [kubernetes-upgrade-201000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:14:59.281595    4415 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:14:59.285545    4415 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:14:59.281650    4415 notify.go:220] Checking for updates...
	I0706 11:14:59.292521    4415 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:14:59.296448    4415 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:14:59.299521    4415 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:14:59.303378    4415 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:14:59.306743    4415 config.go:182] Loaded profile config "cert-expiration-868000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:14:59.306814    4415 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:14:59.306853    4415 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:14:59.311526    4415 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:14:59.316501    4415 start.go:297] selected driver: qemu2
	I0706 11:14:59.316510    4415 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:14:59.316517    4415 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:14:59.318620    4415 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:14:59.321476    4415 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:14:59.324538    4415 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0706 11:14:59.324558    4415 cni.go:84] Creating CNI manager for ""
	I0706 11:14:59.324566    4415 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0706 11:14:59.324572    4415 start_flags.go:319] config:
	{Name:kubernetes-upgrade-201000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-201000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:14:59.329006    4415 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:14:59.336538    4415 out.go:177] * Starting control plane node kubernetes-upgrade-201000 in cluster kubernetes-upgrade-201000
	I0706 11:14:59.340473    4415 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0706 11:14:59.340506    4415 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0706 11:14:59.340524    4415 cache.go:57] Caching tarball of preloaded images
	I0706 11:14:59.340596    4415 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:14:59.340601    4415 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0706 11:14:59.340678    4415 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/kubernetes-upgrade-201000/config.json ...
	I0706 11:14:59.340694    4415 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/kubernetes-upgrade-201000/config.json: {Name:mk0cd26cc2ec0e90ae1de416f1d2863310b05e64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:14:59.340921    4415 start.go:365] acquiring machines lock for kubernetes-upgrade-201000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:14:59.340958    4415 start.go:369] acquired machines lock for "kubernetes-upgrade-201000" in 26.292µs
	I0706 11:14:59.340974    4415 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-201000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:14:59.341016    4415 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:14:59.349453    4415 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:14:59.365727    4415 start.go:159] libmachine.API.Create for "kubernetes-upgrade-201000" (driver="qemu2")
	I0706 11:14:59.365757    4415 client.go:168] LocalClient.Create starting
	I0706 11:14:59.365814    4415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:14:59.365832    4415 main.go:141] libmachine: Decoding PEM data...
	I0706 11:14:59.365845    4415 main.go:141] libmachine: Parsing certificate...
	I0706 11:14:59.365891    4415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:14:59.365906    4415 main.go:141] libmachine: Decoding PEM data...
	I0706 11:14:59.365913    4415 main.go:141] libmachine: Parsing certificate...
	I0706 11:14:59.366224    4415 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:14:59.477277    4415 main.go:141] libmachine: Creating SSH key...
	I0706 11:14:59.550355    4415 main.go:141] libmachine: Creating Disk image...
	I0706 11:14:59.550365    4415 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:14:59.550955    4415 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2
	I0706 11:14:59.560307    4415 main.go:141] libmachine: STDOUT: 
	I0706 11:14:59.560323    4415 main.go:141] libmachine: STDERR: 
	I0706 11:14:59.560374    4415 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2 +20000M
	I0706 11:14:59.567526    4415 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:14:59.567547    4415 main.go:141] libmachine: STDERR: 
	I0706 11:14:59.567567    4415 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2
	I0706 11:14:59.567580    4415 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:14:59.567619    4415 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:b9:43:f4:5e:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2
	I0706 11:14:59.569184    4415 main.go:141] libmachine: STDOUT: 
	I0706 11:14:59.569199    4415 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:14:59.569215    4415 client.go:171] LocalClient.Create took 203.456ms
	I0706 11:15:01.571412    4415 start.go:128] duration metric: createHost completed in 2.230370125s
	I0706 11:15:01.571486    4415 start.go:83] releasing machines lock for "kubernetes-upgrade-201000", held for 2.230525417s
	W0706 11:15:01.571569    4415 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:15:01.581659    4415 out.go:177] * Deleting "kubernetes-upgrade-201000" in qemu2 ...
	W0706 11:15:01.600717    4415 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:15:01.600752    4415 start.go:687] Will try again in 5 seconds ...
	I0706 11:15:06.602822    4415 start.go:365] acquiring machines lock for kubernetes-upgrade-201000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:15:06.602935    4415 start.go:369] acquired machines lock for "kubernetes-upgrade-201000" in 88.041µs
	I0706 11:15:06.602957    4415 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-201000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:15:06.603012    4415 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:15:06.607907    4415 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:15:06.622741    4415 start.go:159] libmachine.API.Create for "kubernetes-upgrade-201000" (driver="qemu2")
	I0706 11:15:06.622760    4415 client.go:168] LocalClient.Create starting
	I0706 11:15:06.622832    4415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:15:06.622849    4415 main.go:141] libmachine: Decoding PEM data...
	I0706 11:15:06.622858    4415 main.go:141] libmachine: Parsing certificate...
	I0706 11:15:06.622898    4415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:15:06.622912    4415 main.go:141] libmachine: Decoding PEM data...
	I0706 11:15:06.622918    4415 main.go:141] libmachine: Parsing certificate...
	I0706 11:15:06.625188    4415 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:15:06.813471    4415 main.go:141] libmachine: Creating SSH key...
	I0706 11:15:06.916787    4415 main.go:141] libmachine: Creating Disk image...
	I0706 11:15:06.916793    4415 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:15:06.916951    4415 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2
	I0706 11:15:06.925577    4415 main.go:141] libmachine: STDOUT: 
	I0706 11:15:06.925591    4415 main.go:141] libmachine: STDERR: 
	I0706 11:15:06.925649    4415 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2 +20000M
	I0706 11:15:06.932757    4415 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:15:06.932771    4415 main.go:141] libmachine: STDERR: 
	I0706 11:15:06.932788    4415 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2
	I0706 11:15:06.932799    4415 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:15:06.932840    4415 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:b0:0b:90:0c:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2
	I0706 11:15:06.934359    4415 main.go:141] libmachine: STDOUT: 
	I0706 11:15:06.934371    4415 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:15:06.934382    4415 client.go:171] LocalClient.Create took 311.616583ms
	I0706 11:15:08.936538    4415 start.go:128] duration metric: createHost completed in 2.33351625s
	I0706 11:15:08.936633    4415 start.go:83] releasing machines lock for "kubernetes-upgrade-201000", held for 2.333664792s
	W0706 11:15:08.936980    4415 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-201000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:15:08.951549    4415 out.go:177] 
	W0706 11:15:08.957578    4415 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:15:08.957603    4415 out.go:239] * 
	W0706 11:15:08.960415    4415 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:15:08.972509    4415 out.go:177] 

** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-201000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-201000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-201000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-201000 status --format={{.Host}}: exit status 7 (35.522583ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-201000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-201000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.16498s)

-- stdout --
	* [kubernetes-upgrade-201000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-201000 in cluster kubernetes-upgrade-201000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-201000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-201000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:15:09.151119    4443 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:15:09.151223    4443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:15:09.151227    4443 out.go:309] Setting ErrFile to fd 2...
	I0706 11:15:09.151229    4443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:15:09.151302    4443 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:15:09.152284    4443 out.go:303] Setting JSON to false
	I0706 11:15:09.167810    4443 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2681,"bootTime":1688664628,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:15:09.167869    4443 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:15:09.171559    4443 out.go:177] * [kubernetes-upgrade-201000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:15:09.178433    4443 notify.go:220] Checking for updates...
	I0706 11:15:09.182479    4443 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:15:09.185495    4443 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:15:09.188491    4443 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:15:09.191469    4443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:15:09.194431    4443 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:15:09.197389    4443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:15:09.200648    4443 config.go:182] Loaded profile config "kubernetes-upgrade-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0706 11:15:09.200890    4443 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:15:09.204489    4443 out.go:177] * Using the qemu2 driver based on existing profile
	I0706 11:15:09.211470    4443 start.go:297] selected driver: qemu2
	I0706 11:15:09.211475    4443 start.go:944] validating driver "qemu2" against &{Name:kubernetes-upgrade-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-201000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:15:09.211520    4443 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:15:09.213580    4443 cni.go:84] Creating CNI manager for ""
	I0706 11:15:09.213591    4443 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:15:09.213595    4443 start_flags.go:319] config:
	{Name:kubernetes-upgrade-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-201000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:15:09.217635    4443 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:15:09.222450    4443 out.go:177] * Starting control plane node kubernetes-upgrade-201000 in cluster kubernetes-upgrade-201000
	I0706 11:15:09.226440    4443 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:15:09.226503    4443 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:15:09.226514    4443 cache.go:57] Caching tarball of preloaded images
	I0706 11:15:09.226574    4443 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:15:09.226578    4443 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:15:09.226639    4443 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/kubernetes-upgrade-201000/config.json ...
	I0706 11:15:09.226970    4443 start.go:365] acquiring machines lock for kubernetes-upgrade-201000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:15:09.226995    4443 start.go:369] acquired machines lock for "kubernetes-upgrade-201000" in 17.708µs
	I0706 11:15:09.227005    4443 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:15:09.227011    4443 fix.go:54] fixHost starting: 
	I0706 11:15:09.227116    4443 fix.go:102] recreateIfNeeded on kubernetes-upgrade-201000: state=Stopped err=<nil>
	W0706 11:15:09.227123    4443 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:15:09.235437    4443 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-201000" ...
	I0706 11:15:09.236904    4443 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:b0:0b:90:0c:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2
	I0706 11:15:09.238555    4443 main.go:141] libmachine: STDOUT: 
	I0706 11:15:09.238568    4443 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:15:09.238593    4443 fix.go:56] fixHost completed within 11.58475ms
	I0706 11:15:09.238596    4443 start.go:83] releasing machines lock for "kubernetes-upgrade-201000", held for 11.597708ms
	W0706 11:15:09.238602    4443 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:15:09.238635    4443 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:15:09.238639    4443 start.go:687] Will try again in 5 seconds ...
	I0706 11:15:14.240757    4443 start.go:365] acquiring machines lock for kubernetes-upgrade-201000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:15:14.240857    4443 start.go:369] acquired machines lock for "kubernetes-upgrade-201000" in 68.792µs
	I0706 11:15:14.240889    4443 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:15:14.240893    4443 fix.go:54] fixHost starting: 
	I0706 11:15:14.241074    4443 fix.go:102] recreateIfNeeded on kubernetes-upgrade-201000: state=Stopped err=<nil>
	W0706 11:15:14.241081    4443 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:15:14.248890    4443 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-201000" ...
	I0706 11:15:14.252919    4443 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:b0:0b:90:0c:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubernetes-upgrade-201000/disk.qcow2
	I0706 11:15:14.255295    4443 main.go:141] libmachine: STDOUT: 
	I0706 11:15:14.255320    4443 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:15:14.255349    4443 fix.go:56] fixHost completed within 14.455708ms
	I0706 11:15:14.255354    4443 start.go:83] releasing machines lock for "kubernetes-upgrade-201000", held for 14.490708ms
	W0706 11:15:14.255425    4443 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-201000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:15:14.266727    4443 out.go:177] 
	W0706 11:15:14.269841    4443 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:15:14.269848    4443 out.go:239] * 
	W0706 11:15:14.270536    4443 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:15:14.280771    4443 out.go:177] 

** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-201000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-201000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-201000 version --output=json: exit status 1 (35.981125ms)

** stderr ** 
	error: context "kubernetes-upgrade-201000" does not exist

** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-07-06 11:15:14.326432 -0700 PDT m=+1142.477870709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-201000 -n kubernetes-upgrade-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-201000 -n kubernetes-upgrade-201000: exit status 7 (29.794834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-201000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-201000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-201000
--- FAIL: TestKubernetesUpgrade (15.22s)
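Every qemu2 failure in this run reduces to the same root cause: nothing is listening on `/var/run/socket_vmnet`, so `socket_vmnet_client` gets "Connection refused" before the VM can start. A minimal preflight sketch (the socket path is taken from the log above; everything else is an assumption, not minikube's own tooling):

```shell
#!/bin/sh
# Preflight for --driver=qemu2: the driver execs socket_vmnet_client, which
# needs the socket_vmnet daemon listening on a unix socket. Check first.

check_vmnet_socket() {
  # $1: unix socket the qemu2 driver connects to
  if [ -S "$1" ]; then
    echo "ok: $1"
  else
    echo "missing: $1 (socket_vmnet daemon not running?)"
    return 1
  fi
}

check_vmnet_socket "${VMNET_SOCK:-/var/run/socket_vmnet}" || true
```

If the socket is missing, restarting the socket_vmnet daemon on the build agent (e.g. via the launchd service its install instructions set up) should clear the "Connection refused" errors above rather than `minikube delete`, which only removes the half-created profile.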

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.38s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=15452
- KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4151350586/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.38s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.58s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=15452
- KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2311474556/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.58s)
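Both hyperkit skip-upgrade subtests die identically: hyperkit is an Intel-only hypervisor, and this agent is Apple silicon (darwin/arm64), so minikube exits with DRV_UNSUPPORTED_OS (exit 56) before the upgrade path is ever exercised. A sketch of the host-side guard such tests would need, reduced to the one fact the log shows (hyperkit requires darwin/x86_64):

```shell
#!/bin/sh
# hyperkit only runs on Intel macs; skip the driver tests anywhere else.

hyperkit_supported() {
  # $1: OS as from `uname -s`, $2: machine arch as from `uname -m`
  [ "$1" = "Darwin" ] && [ "$2" = "x86_64" ]
}

if hyperkit_supported "$(uname -s)" "$(uname -m)"; then
  echo "hyperkit: supported on this host"
else
  echo "hyperkit: unsupported on this host, test should be skipped"
fi
```

On this agent `uname -m` reports `arm64`, so the guard would skip rather than fail the run.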

TestStoppedBinaryUpgrade/Setup (175.83s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (175.83s)
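The 404 here is consistent with the setup step requesting a v1.6.2 release binary for darwin/arm64, which predates minikube's Apple-silicon builds and so was never published. A preflight sketch that constructs the release URL and could HEAD-check it before committing to a long download (the URL pattern is an assumption about minikube's release bucket, not taken from this log):

```shell
#!/bin/sh
# Build the release-bucket URL for a version/os/arch, so a missing binary
# can fail fast instead of three minutes into the test.

release_url() {
  echo "https://storage.googleapis.com/minikube/releases/$1/minikube-$2-$3"
}

url=$(release_url v1.6.2 darwin arm64)
echo "$url"
# Uncomment to probe (needs network):
# curl -fsIL "$url" >/dev/null || echo "no release binary at $url"
```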

TestPause/serial/Start (9.98s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-680000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-680000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.9111755s)

-- stdout --
	* [pause-680000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-680000 in cluster pause-680000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-680000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-680000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-680000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-680000 -n pause-680000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-680000 -n pause-680000: exit status 7 (69.02175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-680000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.98s)

TestNoKubernetes/serial/StartWithK8s (9.71s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-244000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-244000 --driver=qemu2 : exit status 80 (9.645019083s)

-- stdout --
	* [NoKubernetes-244000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-244000 in cluster NoKubernetes-244000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-244000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-244000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-244000 -n NoKubernetes-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-244000 -n NoKubernetes-244000: exit status 7 (68.155416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-244000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.71s)

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-244000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-244000 --no-kubernetes --driver=qemu2 : exit status 80 (5.230115125s)

-- stdout --
	* [NoKubernetes-244000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-244000
	* Restarting existing qemu2 VM for "NoKubernetes-244000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-244000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-244000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-244000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-244000 -n NoKubernetes-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-244000 -n NoKubernetes-244000: exit status 7 (69.600667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-244000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-244000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-244000 --no-kubernetes --driver=qemu2 : exit status 80 (5.241208708s)

-- stdout --
	* [NoKubernetes-244000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-244000
	* Restarting existing qemu2 VM for "NoKubernetes-244000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-244000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-244000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-244000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-244000 -n NoKubernetes-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-244000 -n NoKubernetes-244000: exit status 7 (69.534459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-244000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-244000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-244000 --driver=qemu2 : exit status 80 (5.237833875s)

-- stdout --
	* [NoKubernetes-244000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-244000
	* Restarting existing qemu2 VM for "NoKubernetes-244000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-244000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-244000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-244000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-244000 -n NoKubernetes-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-244000 -n NoKubernetes-244000: exit status 7 (69.901333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-244000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

TestNetworkPlugins/group/auto/Start (9.7s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0706 11:15:51.817881    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.695297833s)

-- stdout --
	* [auto-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-264000 in cluster auto-264000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-264000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:15:50.652383    4555 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:15:50.652524    4555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:15:50.652527    4555 out.go:309] Setting ErrFile to fd 2...
	I0706 11:15:50.652529    4555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:15:50.652595    4555 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:15:50.653619    4555 out.go:303] Setting JSON to false
	I0706 11:15:50.668833    4555 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2722,"bootTime":1688664628,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:15:50.668889    4555 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:15:50.673069    4555 out.go:177] * [auto-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:15:50.681016    4555 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:15:50.681076    4555 notify.go:220] Checking for updates...
	I0706 11:15:50.687993    4555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:15:50.690961    4555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:15:50.694012    4555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:15:50.697060    4555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:15:50.700014    4555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:15:50.703325    4555 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:15:50.703368    4555 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:15:50.707941    4555 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:15:50.714942    4555 start.go:297] selected driver: qemu2
	I0706 11:15:50.714951    4555 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:15:50.714958    4555 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:15:50.717002    4555 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:15:50.721014    4555 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:15:50.724105    4555 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:15:50.724128    4555 cni.go:84] Creating CNI manager for ""
	I0706 11:15:50.724136    4555 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:15:50.724143    4555 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:15:50.724150    4555 start_flags.go:319] config:
	{Name:auto-264000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:15:50.728302    4555 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:15:50.732100    4555 out.go:177] * Starting control plane node auto-264000 in cluster auto-264000
	I0706 11:15:50.735988    4555 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:15:50.736013    4555 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:15:50.736030    4555 cache.go:57] Caching tarball of preloaded images
	I0706 11:15:50.736082    4555 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:15:50.736088    4555 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:15:50.736155    4555 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/auto-264000/config.json ...
	I0706 11:15:50.736167    4555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/auto-264000/config.json: {Name:mke4f846abf6df896c4923171053ec1c9c55ccf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:15:50.736390    4555 start.go:365] acquiring machines lock for auto-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:15:50.736419    4555 start.go:369] acquired machines lock for "auto-264000" in 23.542µs
	I0706 11:15:50.736430    4555 start.go:93] Provisioning new machine with config: &{Name:auto-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.27.3 ClusterName:auto-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:15:50.736464    4555 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:15:50.744969    4555 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:15:50.760888    4555 start.go:159] libmachine.API.Create for "auto-264000" (driver="qemu2")
	I0706 11:15:50.760914    4555 client.go:168] LocalClient.Create starting
	I0706 11:15:50.760966    4555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:15:50.760987    4555 main.go:141] libmachine: Decoding PEM data...
	I0706 11:15:50.760998    4555 main.go:141] libmachine: Parsing certificate...
	I0706 11:15:50.761042    4555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:15:50.761057    4555 main.go:141] libmachine: Decoding PEM data...
	I0706 11:15:50.761063    4555 main.go:141] libmachine: Parsing certificate...
	I0706 11:15:50.761401    4555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:15:50.877992    4555 main.go:141] libmachine: Creating SSH key...
	I0706 11:15:50.959668    4555 main.go:141] libmachine: Creating Disk image...
	I0706 11:15:50.959675    4555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:15:50.959825    4555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/disk.qcow2
	I0706 11:15:50.968238    4555 main.go:141] libmachine: STDOUT: 
	I0706 11:15:50.968252    4555 main.go:141] libmachine: STDERR: 
	I0706 11:15:50.968305    4555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/disk.qcow2 +20000M
	I0706 11:15:50.975506    4555 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:15:50.975517    4555 main.go:141] libmachine: STDERR: 
	I0706 11:15:50.975536    4555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/disk.qcow2
	I0706 11:15:50.975542    4555 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:15:50.975573    4555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:d2:40:9a:14:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/disk.qcow2
	I0706 11:15:50.977121    4555 main.go:141] libmachine: STDOUT: 
	I0706 11:15:50.977135    4555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:15:50.977150    4555 client.go:171] LocalClient.Create took 216.230917ms
	I0706 11:15:52.979361    4555 start.go:128] duration metric: createHost completed in 2.242872542s
	I0706 11:15:52.979453    4555 start.go:83] releasing machines lock for "auto-264000", held for 2.243031959s
	W0706 11:15:52.979567    4555 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:15:52.986988    4555 out.go:177] * Deleting "auto-264000" in qemu2 ...
	W0706 11:15:53.008377    4555 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:15:53.008408    4555 start.go:687] Will try again in 5 seconds ...
	I0706 11:15:58.010644    4555 start.go:365] acquiring machines lock for auto-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:15:58.011184    4555 start.go:369] acquired machines lock for "auto-264000" in 427.958µs
	I0706 11:15:58.011292    4555 start.go:93] Provisioning new machine with config: &{Name:auto-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.27.3 ClusterName:auto-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:15:58.011595    4555 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:15:58.020199    4555 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:15:58.068533    4555 start.go:159] libmachine.API.Create for "auto-264000" (driver="qemu2")
	I0706 11:15:58.068581    4555 client.go:168] LocalClient.Create starting
	I0706 11:15:58.068700    4555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:15:58.068767    4555 main.go:141] libmachine: Decoding PEM data...
	I0706 11:15:58.068787    4555 main.go:141] libmachine: Parsing certificate...
	I0706 11:15:58.068881    4555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:15:58.068917    4555 main.go:141] libmachine: Decoding PEM data...
	I0706 11:15:58.068929    4555 main.go:141] libmachine: Parsing certificate...
	I0706 11:15:58.069473    4555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:15:58.195968    4555 main.go:141] libmachine: Creating SSH key...
	I0706 11:15:58.263136    4555 main.go:141] libmachine: Creating Disk image...
	I0706 11:15:58.263142    4555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:15:58.263287    4555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/disk.qcow2
	I0706 11:15:58.271757    4555 main.go:141] libmachine: STDOUT: 
	I0706 11:15:58.271770    4555 main.go:141] libmachine: STDERR: 
	I0706 11:15:58.271828    4555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/disk.qcow2 +20000M
	I0706 11:15:58.278950    4555 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:15:58.278963    4555 main.go:141] libmachine: STDERR: 
	I0706 11:15:58.278975    4555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/disk.qcow2
	I0706 11:15:58.278980    4555 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:15:58.279024    4555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:9d:b2:6c:fe:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/auto-264000/disk.qcow2
	I0706 11:15:58.280566    4555 main.go:141] libmachine: STDOUT: 
	I0706 11:15:58.280586    4555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:15:58.280600    4555 client.go:171] LocalClient.Create took 212.01375ms
	I0706 11:16:00.282788    4555 start.go:128] duration metric: createHost completed in 2.271132125s
	I0706 11:16:00.282900    4555 start.go:83] releasing machines lock for "auto-264000", held for 2.271698708s
	W0706 11:16:00.283414    4555 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:00.291984    4555 out.go:177] 
	W0706 11:16:00.296097    4555 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:16:00.296154    4555 out.go:239] * 
	* 
	W0706 11:16:00.298865    4555 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:16:00.308024    4555 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.70s)
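Both VM-create attempts above fail for the same reason visible in STDERR: nothing is listening on the socket_vmnet daemon socket at /var/run/socket_vmnet, so socket_vmnet_client exits before QEMU ever starts. A minimal pre-flight sketch of that check (the path is taken from the log above; the helper itself is hypothetical, not part of the test harness):

```shell
#!/bin/sh
# Hypothetical pre-flight check: does the socket_vmnet daemon socket exist?
# Default path matches the one in the failing log; override via $1.
SOCK="${1:-/var/run/socket_vmnet}"
if [ -S "$SOCK" ]; then
  STATUS="present"
else
  STATUS="missing"
fi
echo "socket_vmnet socket $STATUS at $SOCK"
```

If the socket is missing on the CI host, starting the daemon before the run (for Homebrew installs, typically `sudo brew services start socket_vmnet`) would likely clear this whole family of "Connection refused" failures.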
TestNetworkPlugins/group/kindnet/Start (9.8s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.799918542s)

-- stdout --
	* [kindnet-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-264000 in cluster kindnet-264000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-264000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:16:02.416662    4668 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:16:02.416774    4668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:16:02.416776    4668 out.go:309] Setting ErrFile to fd 2...
	I0706 11:16:02.416779    4668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:16:02.416849    4668 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:16:02.417908    4668 out.go:303] Setting JSON to false
	I0706 11:16:02.433081    4668 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2734,"bootTime":1688664628,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:16:02.433151    4668 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:16:02.440982    4668 out.go:177] * [kindnet-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:16:02.444940    4668 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:16:02.445000    4668 notify.go:220] Checking for updates...
	I0706 11:16:02.448845    4668 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:16:02.451957    4668 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:16:02.455012    4668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:16:02.457932    4668 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:16:02.460951    4668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:16:02.464307    4668 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:16:02.464351    4668 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:16:02.467815    4668 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:16:02.474925    4668 start.go:297] selected driver: qemu2
	I0706 11:16:02.474931    4668 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:16:02.474936    4668 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:16:02.476891    4668 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:16:02.478146    4668 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:16:02.480957    4668 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:16:02.480972    4668 cni.go:84] Creating CNI manager for "kindnet"
	I0706 11:16:02.480975    4668 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0706 11:16:02.480978    4668 start_flags.go:319] config:
	{Name:kindnet-264000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0}
	I0706 11:16:02.484952    4668 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:16:02.491823    4668 out.go:177] * Starting control plane node kindnet-264000 in cluster kindnet-264000
	I0706 11:16:02.495866    4668 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:16:02.495890    4668 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:16:02.495908    4668 cache.go:57] Caching tarball of preloaded images
	I0706 11:16:02.495963    4668 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:16:02.495968    4668 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:16:02.496032    4668 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/kindnet-264000/config.json ...
	I0706 11:16:02.496044    4668 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/kindnet-264000/config.json: {Name:mk3c5a42e7330389284a583ac45a919230e327f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:16:02.496243    4668 start.go:365] acquiring machines lock for kindnet-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:16:02.496270    4668 start.go:369] acquired machines lock for "kindnet-264000" in 22.417µs
	I0706 11:16:02.496281    4668 start.go:93] Provisioning new machine with config: &{Name:kindnet-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:16:02.496310    4668 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:16:02.504952    4668 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:16:02.519789    4668 start.go:159] libmachine.API.Create for "kindnet-264000" (driver="qemu2")
	I0706 11:16:02.519812    4668 client.go:168] LocalClient.Create starting
	I0706 11:16:02.519865    4668 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:16:02.519882    4668 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:02.519894    4668 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:02.519939    4668 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:16:02.519953    4668 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:02.519964    4668 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:02.520256    4668 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:16:02.643232    4668 main.go:141] libmachine: Creating SSH key...
	I0706 11:16:02.726345    4668 main.go:141] libmachine: Creating Disk image...
	I0706 11:16:02.726358    4668 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:16:02.726512    4668 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/disk.qcow2
	I0706 11:16:02.734887    4668 main.go:141] libmachine: STDOUT: 
	I0706 11:16:02.734901    4668 main.go:141] libmachine: STDERR: 
	I0706 11:16:02.734973    4668 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/disk.qcow2 +20000M
	I0706 11:16:02.742290    4668 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:16:02.742303    4668 main.go:141] libmachine: STDERR: 
	I0706 11:16:02.742321    4668 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/disk.qcow2
	I0706 11:16:02.742335    4668 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:16:02.742378    4668 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:55:bc:02:13:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/disk.qcow2
	I0706 11:16:02.743928    4668 main.go:141] libmachine: STDOUT: 
	I0706 11:16:02.743942    4668 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:16:02.743959    4668 client.go:171] LocalClient.Create took 224.143958ms
	I0706 11:16:04.746126    4668 start.go:128] duration metric: createHost completed in 2.249806459s
	I0706 11:16:04.746227    4668 start.go:83] releasing machines lock for "kindnet-264000", held for 2.24992425s
	W0706 11:16:04.746299    4668 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:04.757457    4668 out.go:177] * Deleting "kindnet-264000" in qemu2 ...
	W0706 11:16:04.777726    4668 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:04.777755    4668 start.go:687] Will try again in 5 seconds ...
	I0706 11:16:09.779388    4668 start.go:365] acquiring machines lock for kindnet-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:16:09.779899    4668 start.go:369] acquired machines lock for "kindnet-264000" in 415.208µs
	I0706 11:16:09.780007    4668 start.go:93] Provisioning new machine with config: &{Name:kindnet-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:16:09.780304    4668 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:16:09.789630    4668 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:16:09.836772    4668 start.go:159] libmachine.API.Create for "kindnet-264000" (driver="qemu2")
	I0706 11:16:09.836817    4668 client.go:168] LocalClient.Create starting
	I0706 11:16:09.836946    4668 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:16:09.837004    4668 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:09.837024    4668 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:09.837112    4668 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:16:09.837141    4668 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:09.837160    4668 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:09.837833    4668 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:16:09.969244    4668 main.go:141] libmachine: Creating SSH key...
	I0706 11:16:10.129983    4668 main.go:141] libmachine: Creating Disk image...
	I0706 11:16:10.129989    4668 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:16:10.130168    4668 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/disk.qcow2
	I0706 11:16:10.139097    4668 main.go:141] libmachine: STDOUT: 
	I0706 11:16:10.139111    4668 main.go:141] libmachine: STDERR: 
	I0706 11:16:10.139182    4668 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/disk.qcow2 +20000M
	I0706 11:16:10.146392    4668 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:16:10.146405    4668 main.go:141] libmachine: STDERR: 
	I0706 11:16:10.146424    4668 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/disk.qcow2
	I0706 11:16:10.146432    4668 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:16:10.146473    4668 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:6c:e0:b4:28:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kindnet-264000/disk.qcow2
	I0706 11:16:10.148048    4668 main.go:141] libmachine: STDOUT: 
	I0706 11:16:10.148061    4668 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:16:10.148074    4668 client.go:171] LocalClient.Create took 311.254417ms
	I0706 11:16:12.150260    4668 start.go:128] duration metric: createHost completed in 2.369922625s
	I0706 11:16:12.150389    4668 start.go:83] releasing machines lock for "kindnet-264000", held for 2.370427333s
	W0706 11:16:12.151019    4668 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:12.159207    4668 out.go:177] 
	W0706 11:16:12.163722    4668 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:16:12.163748    4668 out.go:239] * 
	* 
	W0706 11:16:12.166668    4668 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:16:12.175651    4668 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.80s)

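Every failing start in this run dies at the same step: `socket_vmnet_client` cannot reach the unix socket at `/var/run/socket_vmnet` ("Connection refused"), which indicates the socket_vmnet daemon is not running on the build agent. A minimal pre-flight check one might run on the host is sketched below; the `check_vmnet_socket` helper is hypothetical (not part of minikube or socket_vmnet), and it assumes only that socket_vmnet exposes a unix socket at the configured path when its service is up.

```shell
#!/bin/sh
# Hypothetical pre-flight check for the socket_vmnet unix socket.
# The failing QEMU starts above all report:
#   Failed to connect to "/var/run/socket_vmnet": Connection refused
# which means nothing is listening at that path.
check_vmnet_socket() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    # -S is true only for a unix domain socket, not a plain file.
    echo "socket present: $sock"
  else
    echo "socket missing: $sock (is the socket_vmnet service running?)"
    return 1
  fi
}

check_vmnet_socket "$@"
```

On a correctly provisioned agent the check passes before the test suite starts; here it would fail, matching the repeated `GUEST_PROVISION` exit status 80 across the network-plugin tests.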
TestNetworkPlugins/group/calico/Start (9.7s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.693459s)

-- stdout --
	* [calico-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-264000 in cluster calico-264000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-264000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:16:14.379092    4787 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:16:14.379215    4787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:16:14.379217    4787 out.go:309] Setting ErrFile to fd 2...
	I0706 11:16:14.379220    4787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:16:14.379286    4787 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:16:14.380285    4787 out.go:303] Setting JSON to false
	I0706 11:16:14.395586    4787 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2746,"bootTime":1688664628,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:16:14.395635    4787 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:16:14.401188    4787 out.go:177] * [calico-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:16:14.409035    4787 notify.go:220] Checking for updates...
	I0706 11:16:14.413095    4787 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:16:14.416168    4787 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:16:14.419084    4787 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:16:14.422124    4787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:16:14.425152    4787 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:16:14.428030    4787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:16:14.431401    4787 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:16:14.431446    4787 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:16:14.435145    4787 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:16:14.442049    4787 start.go:297] selected driver: qemu2
	I0706 11:16:14.442053    4787 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:16:14.442058    4787 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:16:14.444047    4787 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:16:14.447082    4787 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:16:14.450085    4787 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:16:14.450106    4787 cni.go:84] Creating CNI manager for "calico"
	I0706 11:16:14.450112    4787 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0706 11:16:14.450119    4787 start_flags.go:319] config:
	{Name:calico-264000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0}
	I0706 11:16:14.454411    4787 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:16:14.457085    4787 out.go:177] * Starting control plane node calico-264000 in cluster calico-264000
	I0706 11:16:14.465086    4787 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:16:14.465110    4787 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:16:14.465129    4787 cache.go:57] Caching tarball of preloaded images
	I0706 11:16:14.465206    4787 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:16:14.465211    4787 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:16:14.465280    4787 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/calico-264000/config.json ...
	I0706 11:16:14.465295    4787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/calico-264000/config.json: {Name:mk7278c8f8f811365c709a4250b91d041c63f3e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:16:14.465526    4787 start.go:365] acquiring machines lock for calico-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:16:14.465567    4787 start.go:369] acquired machines lock for "calico-264000" in 33µs
	I0706 11:16:14.465580    4787 start.go:93] Provisioning new machine with config: &{Name:calico-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.3 ClusterName:calico-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:16:14.465625    4787 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:16:14.474062    4787 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:16:14.489954    4787 start.go:159] libmachine.API.Create for "calico-264000" (driver="qemu2")
	I0706 11:16:14.489975    4787 client.go:168] LocalClient.Create starting
	I0706 11:16:14.490037    4787 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:16:14.490056    4787 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:14.490073    4787 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:14.490109    4787 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:16:14.490124    4787 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:14.490132    4787 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:14.490457    4787 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:16:14.642826    4787 main.go:141] libmachine: Creating SSH key...
	I0706 11:16:14.696542    4787 main.go:141] libmachine: Creating Disk image...
	I0706 11:16:14.696547    4787 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:16:14.696690    4787 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/disk.qcow2
	I0706 11:16:14.705072    4787 main.go:141] libmachine: STDOUT: 
	I0706 11:16:14.705086    4787 main.go:141] libmachine: STDERR: 
	I0706 11:16:14.705134    4787 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/disk.qcow2 +20000M
	I0706 11:16:14.712444    4787 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:16:14.712462    4787 main.go:141] libmachine: STDERR: 
	I0706 11:16:14.712478    4787 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/disk.qcow2
	I0706 11:16:14.712483    4787 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:16:14.712513    4787 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:b9:19:ea:54:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/disk.qcow2
	I0706 11:16:14.714045    4787 main.go:141] libmachine: STDOUT: 
	I0706 11:16:14.714059    4787 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:16:14.714077    4787 client.go:171] LocalClient.Create took 224.097041ms
	I0706 11:16:16.716502    4787 start.go:128] duration metric: createHost completed in 2.250864s
	I0706 11:16:16.716544    4787 start.go:83] releasing machines lock for "calico-264000", held for 2.250975875s
	W0706 11:16:16.716590    4787 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:16.723938    4787 out.go:177] * Deleting "calico-264000" in qemu2 ...
	W0706 11:16:16.747117    4787 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:16.747142    4787 start.go:687] Will try again in 5 seconds ...
	I0706 11:16:21.749406    4787 start.go:365] acquiring machines lock for calico-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:16:21.749948    4787 start.go:369] acquired machines lock for "calico-264000" in 438.792µs
	I0706 11:16:21.750078    4787 start.go:93] Provisioning new machine with config: &{Name:calico-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.3 ClusterName:calico-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:16:21.750319    4787 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:16:21.761024    4787 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:16:21.810795    4787 start.go:159] libmachine.API.Create for "calico-264000" (driver="qemu2")
	I0706 11:16:21.810851    4787 client.go:168] LocalClient.Create starting
	I0706 11:16:21.810989    4787 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:16:21.811045    4787 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:21.811069    4787 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:21.811147    4787 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:16:21.811179    4787 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:21.811197    4787 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:21.811717    4787 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:16:21.941421    4787 main.go:141] libmachine: Creating SSH key...
	I0706 11:16:21.986447    4787 main.go:141] libmachine: Creating Disk image...
	I0706 11:16:21.986452    4787 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:16:21.986602    4787 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/disk.qcow2
	I0706 11:16:21.995135    4787 main.go:141] libmachine: STDOUT: 
	I0706 11:16:21.995150    4787 main.go:141] libmachine: STDERR: 
	I0706 11:16:21.995213    4787 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/disk.qcow2 +20000M
	I0706 11:16:22.002394    4787 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:16:22.002407    4787 main.go:141] libmachine: STDERR: 
	I0706 11:16:22.002418    4787 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/disk.qcow2
	I0706 11:16:22.002423    4787 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:16:22.002467    4787 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:ec:18:91:1b:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/calico-264000/disk.qcow2
	I0706 11:16:22.003982    4787 main.go:141] libmachine: STDOUT: 
	I0706 11:16:22.003995    4787 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:16:22.004006    4787 client.go:171] LocalClient.Create took 193.147583ms
	I0706 11:16:24.006230    4787 start.go:128] duration metric: createHost completed in 2.255822042s
	I0706 11:16:24.006290    4787 start.go:83] releasing machines lock for "calico-264000", held for 2.256323708s
	W0706 11:16:24.006678    4787 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:24.015376    4787 out.go:177] 
	W0706 11:16:24.019408    4787 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:16:24.019455    4787 out.go:239] * 
	* 
	W0706 11:16:24.022213    4787 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:16:24.032298    4787 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.70s)
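Every failure above traces back to the same root cause: `socket_vmnet_client` cannot reach the Unix socket at `/var/run/socket_vmnet`, meaning the socket_vmnet daemon is not running on the agent. A minimal check for this condition (the socket path is taken from the log; the daemon binary path and launch command are assumptions based on a typical socket_vmnet install, not confirmed by this report):

```shell
#!/bin/sh
# Check whether the vmnet socket that minikube's qemu2 driver needs exists.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  echo "socket exists: $SOCK"
else
  echo "socket missing: $SOCK (socket_vmnet daemon is likely not running)"
fi
# If missing, the daemon would need to be (re)started, e.g. (path assumed):
#   sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 "$SOCK"
```

If the socket is absent, restarting the socket_vmnet service on the build agent should clear the "Connection refused" errors across all of these tests at once.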

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.704559709s)

                                                
                                                
-- stdout --
	* [custom-flannel-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-264000 in cluster custom-flannel-264000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-264000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0706 11:16:26.364193    4905 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:16:26.364319    4905 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:16:26.364321    4905 out.go:309] Setting ErrFile to fd 2...
	I0706 11:16:26.364324    4905 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:16:26.364400    4905 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:16:26.365514    4905 out.go:303] Setting JSON to false
	I0706 11:16:26.380683    4905 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2758,"bootTime":1688664628,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:16:26.380757    4905 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:16:26.385566    4905 out.go:177] * [custom-flannel-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:16:26.392510    4905 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:16:26.396625    4905 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:16:26.392585    4905 notify.go:220] Checking for updates...
	I0706 11:16:26.402496    4905 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:16:26.405542    4905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:16:26.408493    4905 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:16:26.411542    4905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:16:26.414818    4905 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:16:26.414859    4905 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:16:26.418397    4905 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:16:26.425490    4905 start.go:297] selected driver: qemu2
	I0706 11:16:26.425496    4905 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:16:26.425502    4905 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:16:26.427478    4905 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:16:26.428925    4905 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:16:26.431603    4905 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:16:26.431635    4905 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0706 11:16:26.431651    4905 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0706 11:16:26.431655    4905 start_flags.go:319] config:
	{Name:custom-flannel-264000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:16:26.435789    4905 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:16:26.442392    4905 out.go:177] * Starting control plane node custom-flannel-264000 in cluster custom-flannel-264000
	I0706 11:16:26.446540    4905 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:16:26.446558    4905 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:16:26.446569    4905 cache.go:57] Caching tarball of preloaded images
	I0706 11:16:26.446618    4905 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:16:26.446623    4905 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:16:26.446889    4905 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/custom-flannel-264000/config.json ...
	I0706 11:16:26.446912    4905 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/custom-flannel-264000/config.json: {Name:mk31adee4a5da69d18fce6c1a6964a6847b9ece0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:16:26.447128    4905 start.go:365] acquiring machines lock for custom-flannel-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:16:26.447158    4905 start.go:369] acquired machines lock for "custom-flannel-264000" in 22.459µs
	I0706 11:16:26.447168    4905 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:16:26.447222    4905 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:16:26.451520    4905 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:16:26.467039    4905 start.go:159] libmachine.API.Create for "custom-flannel-264000" (driver="qemu2")
	I0706 11:16:26.467059    4905 client.go:168] LocalClient.Create starting
	I0706 11:16:26.467112    4905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:16:26.467132    4905 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:26.467146    4905 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:26.467193    4905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:16:26.467207    4905 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:26.467214    4905 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:26.467511    4905 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:16:26.583177    4905 main.go:141] libmachine: Creating SSH key...
	I0706 11:16:26.654948    4905 main.go:141] libmachine: Creating Disk image...
	I0706 11:16:26.654957    4905 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:16:26.655112    4905 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/disk.qcow2
	I0706 11:16:26.663703    4905 main.go:141] libmachine: STDOUT: 
	I0706 11:16:26.663718    4905 main.go:141] libmachine: STDERR: 
	I0706 11:16:26.663797    4905 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/disk.qcow2 +20000M
	I0706 11:16:26.671098    4905 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:16:26.671112    4905 main.go:141] libmachine: STDERR: 
	I0706 11:16:26.671129    4905 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/disk.qcow2
	I0706 11:16:26.671138    4905 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:16:26.671172    4905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:ce:6c:9d:98:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/disk.qcow2
	I0706 11:16:26.672694    4905 main.go:141] libmachine: STDOUT: 
	I0706 11:16:26.672703    4905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:16:26.672721    4905 client.go:171] LocalClient.Create took 205.657208ms
	I0706 11:16:28.674884    4905 start.go:128] duration metric: createHost completed in 2.227652125s
	I0706 11:16:28.674942    4905 start.go:83] releasing machines lock for "custom-flannel-264000", held for 2.2277835s
	W0706 11:16:28.675027    4905 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:28.683440    4905 out.go:177] * Deleting "custom-flannel-264000" in qemu2 ...
	W0706 11:16:28.701995    4905 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:28.702024    4905 start.go:687] Will try again in 5 seconds ...
	I0706 11:16:33.704328    4905 start.go:365] acquiring machines lock for custom-flannel-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:16:33.704917    4905 start.go:369] acquired machines lock for "custom-flannel-264000" in 469.292µs
	I0706 11:16:33.705038    4905 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:16:33.705334    4905 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:16:33.714084    4905 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:16:33.762399    4905 start.go:159] libmachine.API.Create for "custom-flannel-264000" (driver="qemu2")
	I0706 11:16:33.762458    4905 client.go:168] LocalClient.Create starting
	I0706 11:16:33.762567    4905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:16:33.762606    4905 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:33.762625    4905 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:33.762706    4905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:16:33.762733    4905 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:33.762759    4905 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:33.763333    4905 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:16:33.903144    4905 main.go:141] libmachine: Creating SSH key...
	I0706 11:16:33.981872    4905 main.go:141] libmachine: Creating Disk image...
	I0706 11:16:33.981877    4905 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:16:33.982032    4905 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/disk.qcow2
	I0706 11:16:33.990644    4905 main.go:141] libmachine: STDOUT: 
	I0706 11:16:33.990658    4905 main.go:141] libmachine: STDERR: 
	I0706 11:16:33.990701    4905 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/disk.qcow2 +20000M
	I0706 11:16:33.997852    4905 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:16:33.997864    4905 main.go:141] libmachine: STDERR: 
	I0706 11:16:33.997875    4905 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/disk.qcow2
	I0706 11:16:33.997879    4905 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:16:33.997923    4905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:31:79:ff:e6:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/custom-flannel-264000/disk.qcow2
	I0706 11:16:33.999405    4905 main.go:141] libmachine: STDOUT: 
	I0706 11:16:33.999420    4905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:16:33.999430    4905 client.go:171] LocalClient.Create took 236.96575ms
	I0706 11:16:36.001588    4905 start.go:128] duration metric: createHost completed in 2.296240458s
	I0706 11:16:36.001651    4905 start.go:83] releasing machines lock for "custom-flannel-264000", held for 2.29671475s
	W0706 11:16:36.002051    4905 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:36.011675    4905 out.go:177] 
	W0706 11:16:36.015773    4905 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:16:36.015798    4905 out.go:239] * 
	* 
	W0706 11:16:36.018664    4905 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:16:36.027730    4905 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.71s)

                                                
                                    
TestNetworkPlugins/group/false/Start (9.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.743272958s)

                                                
                                                
-- stdout --
	* [false-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-264000 in cluster false-264000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-264000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:16:38.347676    5024 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:16:38.347777    5024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:16:38.347780    5024 out.go:309] Setting ErrFile to fd 2...
	I0706 11:16:38.347784    5024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:16:38.347858    5024 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:16:38.349021    5024 out.go:303] Setting JSON to false
	I0706 11:16:38.365029    5024 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2770,"bootTime":1688664628,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:16:38.365097    5024 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:16:38.370575    5024 out.go:177] * [false-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:16:38.376538    5024 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:16:38.376594    5024 notify.go:220] Checking for updates...
	I0706 11:16:38.383525    5024 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:16:38.386527    5024 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:16:38.389505    5024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:16:38.392401    5024 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:16:38.395498    5024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:16:38.398882    5024 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:16:38.398924    5024 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:16:38.402524    5024 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:16:38.409505    5024 start.go:297] selected driver: qemu2
	I0706 11:16:38.409513    5024 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:16:38.409520    5024 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:16:38.411562    5024 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:16:38.412987    5024 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:16:38.415541    5024 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:16:38.415558    5024 cni.go:84] Creating CNI manager for "false"
	I0706 11:16:38.415563    5024 start_flags.go:319] config:
	{Name:false-264000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:false-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPI
D:0}
	I0706 11:16:38.419624    5024 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:16:38.426528    5024 out.go:177] * Starting control plane node false-264000 in cluster false-264000
	I0706 11:16:38.430464    5024 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:16:38.430488    5024 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:16:38.430510    5024 cache.go:57] Caching tarball of preloaded images
	I0706 11:16:38.430560    5024 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:16:38.430566    5024 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:16:38.430629    5024 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/false-264000/config.json ...
	I0706 11:16:38.430641    5024 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/false-264000/config.json: {Name:mk0eefec5ee6c0a12452d650f73ddc058379a9a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:16:38.430842    5024 start.go:365] acquiring machines lock for false-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:16:38.430871    5024 start.go:369] acquired machines lock for "false-264000" in 23.25µs
	I0706 11:16:38.430882    5024 start.go:93] Provisioning new machine with config: &{Name:false-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.27.3 ClusterName:false-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:16:38.430921    5024 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:16:38.435475    5024 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:16:38.451455    5024 start.go:159] libmachine.API.Create for "false-264000" (driver="qemu2")
	I0706 11:16:38.451487    5024 client.go:168] LocalClient.Create starting
	I0706 11:16:38.451542    5024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:16:38.451564    5024 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:38.451573    5024 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:38.451626    5024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:16:38.451640    5024 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:38.451648    5024 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:38.451969    5024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:16:38.576791    5024 main.go:141] libmachine: Creating SSH key...
	I0706 11:16:38.707640    5024 main.go:141] libmachine: Creating Disk image...
	I0706 11:16:38.707648    5024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:16:38.707807    5024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/disk.qcow2
	I0706 11:16:38.716275    5024 main.go:141] libmachine: STDOUT: 
	I0706 11:16:38.716290    5024 main.go:141] libmachine: STDERR: 
	I0706 11:16:38.716357    5024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/disk.qcow2 +20000M
	I0706 11:16:38.723433    5024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:16:38.723460    5024 main.go:141] libmachine: STDERR: 
	I0706 11:16:38.723489    5024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/disk.qcow2
	I0706 11:16:38.723494    5024 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:16:38.723536    5024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:d1:c1:e1:8f:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/disk.qcow2
	I0706 11:16:38.725097    5024 main.go:141] libmachine: STDOUT: 
	I0706 11:16:38.725110    5024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:16:38.725129    5024 client.go:171] LocalClient.Create took 273.63975ms
	I0706 11:16:40.727319    5024 start.go:128] duration metric: createHost completed in 2.296374458s
	I0706 11:16:40.727421    5024 start.go:83] releasing machines lock for "false-264000", held for 2.296548041s
	W0706 11:16:40.727490    5024 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:40.737822    5024 out.go:177] * Deleting "false-264000" in qemu2 ...
	W0706 11:16:40.757760    5024 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:40.757789    5024 start.go:687] Will try again in 5 seconds ...
	I0706 11:16:45.759999    5024 start.go:365] acquiring machines lock for false-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:16:45.760519    5024 start.go:369] acquired machines lock for "false-264000" in 425.166µs
	I0706 11:16:45.760636    5024 start.go:93] Provisioning new machine with config: &{Name:false-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.27.3 ClusterName:false-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:16:45.760988    5024 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:16:45.772764    5024 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:16:45.820165    5024 start.go:159] libmachine.API.Create for "false-264000" (driver="qemu2")
	I0706 11:16:45.820206    5024 client.go:168] LocalClient.Create starting
	I0706 11:16:45.820373    5024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:16:45.820423    5024 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:45.820446    5024 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:45.820538    5024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:16:45.820568    5024 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:45.820579    5024 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:45.821208    5024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:16:45.953829    5024 main.go:141] libmachine: Creating SSH key...
	I0706 11:16:46.003279    5024 main.go:141] libmachine: Creating Disk image...
	I0706 11:16:46.003284    5024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:16:46.003433    5024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/disk.qcow2
	I0706 11:16:46.012067    5024 main.go:141] libmachine: STDOUT: 
	I0706 11:16:46.012083    5024 main.go:141] libmachine: STDERR: 
	I0706 11:16:46.012149    5024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/disk.qcow2 +20000M
	I0706 11:16:46.019294    5024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:16:46.019310    5024 main.go:141] libmachine: STDERR: 
	I0706 11:16:46.019330    5024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/disk.qcow2
	I0706 11:16:46.019335    5024 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:16:46.019366    5024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:6e:52:a9:01:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/false-264000/disk.qcow2
	I0706 11:16:46.020881    5024 main.go:141] libmachine: STDOUT: 
	I0706 11:16:46.020894    5024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:16:46.020906    5024 client.go:171] LocalClient.Create took 200.696583ms
	I0706 11:16:48.023105    5024 start.go:128] duration metric: createHost completed in 2.262088584s
	I0706 11:16:48.023161    5024 start.go:83] releasing machines lock for "false-264000", held for 2.262625709s
	W0706 11:16:48.023578    5024 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:48.034194    5024 out.go:177] 
	W0706 11:16:48.038270    5024 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:16:48.038294    5024 out.go:239] * 
	* 
	W0706 11:16:48.041206    5024 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:16:48.051162    5024 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.75s)

TestNetworkPlugins/group/enable-default-cni/Start (9.66s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.658310625s)

-- stdout --
	* [enable-default-cni-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-264000 in cluster enable-default-cni-264000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-264000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:16:50.201148    5135 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:16:50.201275    5135 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:16:50.201278    5135 out.go:309] Setting ErrFile to fd 2...
	I0706 11:16:50.201280    5135 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:16:50.201350    5135 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:16:50.202379    5135 out.go:303] Setting JSON to false
	I0706 11:16:50.217562    5135 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2782,"bootTime":1688664628,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:16:50.217636    5135 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:16:50.222359    5135 out.go:177] * [enable-default-cni-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:16:50.230462    5135 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:16:50.234382    5135 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:16:50.230591    5135 notify.go:220] Checking for updates...
	I0706 11:16:50.240390    5135 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:16:50.248413    5135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:16:50.251482    5135 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:16:50.255345    5135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:16:50.258693    5135 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:16:50.258739    5135 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:16:50.263231    5135 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:16:50.270447    5135 start.go:297] selected driver: qemu2
	I0706 11:16:50.270454    5135 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:16:50.270464    5135 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:16:50.272463    5135 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:16:50.275385    5135 out.go:177] * Automatically selected the socket_vmnet network
	E0706 11:16:50.278575    5135 start_flags.go:453] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0706 11:16:50.278594    5135 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:16:50.278611    5135 cni.go:84] Creating CNI manager for "bridge"
	I0706 11:16:50.278615    5135 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:16:50.278621    5135 start_flags.go:319] config:
	{Name:enable-default-cni-264000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:16:50.282813    5135 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:16:50.289397    5135 out.go:177] * Starting control plane node enable-default-cni-264000 in cluster enable-default-cni-264000
	I0706 11:16:50.293402    5135 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:16:50.293426    5135 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:16:50.293442    5135 cache.go:57] Caching tarball of preloaded images
	I0706 11:16:50.293517    5135 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:16:50.293522    5135 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:16:50.293588    5135 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/enable-default-cni-264000/config.json ...
	I0706 11:16:50.293605    5135 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/enable-default-cni-264000/config.json: {Name:mk2d4f181ae3ef7c51a24c7426d0b8ceece6ac19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:16:50.293803    5135 start.go:365] acquiring machines lock for enable-default-cni-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:16:50.293834    5135 start.go:369] acquired machines lock for "enable-default-cni-264000" in 24.792µs
	I0706 11:16:50.293846    5135 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:16:50.293875    5135 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:16:50.298428    5135 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:16:50.314885    5135 start.go:159] libmachine.API.Create for "enable-default-cni-264000" (driver="qemu2")
	I0706 11:16:50.314902    5135 client.go:168] LocalClient.Create starting
	I0706 11:16:50.314954    5135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:16:50.314974    5135 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:50.314986    5135 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:50.315019    5135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:16:50.315034    5135 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:50.315041    5135 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:50.315389    5135 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:16:50.431437    5135 main.go:141] libmachine: Creating SSH key...
	I0706 11:16:50.495308    5135 main.go:141] libmachine: Creating Disk image...
	I0706 11:16:50.495315    5135 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:16:50.495472    5135 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/disk.qcow2
	I0706 11:16:50.503923    5135 main.go:141] libmachine: STDOUT: 
	I0706 11:16:50.503937    5135 main.go:141] libmachine: STDERR: 
	I0706 11:16:50.503986    5135 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/disk.qcow2 +20000M
	I0706 11:16:50.511079    5135 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:16:50.511091    5135 main.go:141] libmachine: STDERR: 
	I0706 11:16:50.511108    5135 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/disk.qcow2
	I0706 11:16:50.511114    5135 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:16:50.511147    5135 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:40:d7:0b:22:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/disk.qcow2
	I0706 11:16:50.512659    5135 main.go:141] libmachine: STDOUT: 
	I0706 11:16:50.512673    5135 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:16:50.512689    5135 client.go:171] LocalClient.Create took 197.7845ms
	I0706 11:16:52.514856    5135 start.go:128] duration metric: createHost completed in 2.220968583s
	I0706 11:16:52.514913    5135 start.go:83] releasing machines lock for "enable-default-cni-264000", held for 2.221077583s
	W0706 11:16:52.515003    5135 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:52.523294    5135 out.go:177] * Deleting "enable-default-cni-264000" in qemu2 ...
	W0706 11:16:52.547653    5135 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:52.547678    5135 start.go:687] Will try again in 5 seconds ...
	I0706 11:16:57.549942    5135 start.go:365] acquiring machines lock for enable-default-cni-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:16:57.550383    5135 start.go:369] acquired machines lock for "enable-default-cni-264000" in 338.083µs
	I0706 11:16:57.550517    5135 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:16:57.550835    5135 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:16:57.560520    5135 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:16:57.608384    5135 start.go:159] libmachine.API.Create for "enable-default-cni-264000" (driver="qemu2")
	I0706 11:16:57.608426    5135 client.go:168] LocalClient.Create starting
	I0706 11:16:57.608572    5135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:16:57.608618    5135 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:57.608638    5135 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:57.608722    5135 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:16:57.608749    5135 main.go:141] libmachine: Decoding PEM data...
	I0706 11:16:57.608760    5135 main.go:141] libmachine: Parsing certificate...
	I0706 11:16:57.609235    5135 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:16:57.737035    5135 main.go:141] libmachine: Creating SSH key...
	I0706 11:16:57.776059    5135 main.go:141] libmachine: Creating Disk image...
	I0706 11:16:57.776065    5135 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:16:57.776205    5135 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/disk.qcow2
	I0706 11:16:57.784480    5135 main.go:141] libmachine: STDOUT: 
	I0706 11:16:57.784493    5135 main.go:141] libmachine: STDERR: 
	I0706 11:16:57.784559    5135 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/disk.qcow2 +20000M
	I0706 11:16:57.791970    5135 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:16:57.791994    5135 main.go:141] libmachine: STDERR: 
	I0706 11:16:57.792006    5135 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/disk.qcow2
	I0706 11:16:57.792023    5135 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:16:57.792067    5135 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:5a:d2:dd:d9:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/enable-default-cni-264000/disk.qcow2
	I0706 11:16:57.793641    5135 main.go:141] libmachine: STDOUT: 
	I0706 11:16:57.793653    5135 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:16:57.793666    5135 client.go:171] LocalClient.Create took 185.234833ms
	I0706 11:16:59.795841    5135 start.go:128] duration metric: createHost completed in 2.244988541s
	I0706 11:16:59.795897    5135 start.go:83] releasing machines lock for "enable-default-cni-264000", held for 2.245495542s
	W0706 11:16:59.796191    5135 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:16:59.804519    5135 out.go:177] 
	W0706 11:16:59.808811    5135 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:16:59.808836    5135 out.go:239] * 
	* 
	W0706 11:16:59.810485    5135 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:16:59.820739    5135 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.66s)
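Every failure in this group reduces to the same root cause: the qemu2 driver launches the VM through `/opt/socket_vmnet/bin/socket_vmnet_client`, which cannot reach the socket_vmnet daemon's unix socket at `/var/run/socket_vmnet` (the `SocketVMnetPath` value in the config dumps above). A minimal sketch for checking that precondition on the CI host follows; the `check_socket` helper name is ours for illustration and is not part of minikube or socket_vmnet.

```shell
# Hedged sketch: verify the unix socket the qemu2 driver expects.
# The path /var/run/socket_vmnet is taken from the SocketVMnetPath
# field in the config dumps above; check_socket is a hypothetical helper.
check_socket() {
  # -S is true only if the path exists and is a unix-domain socket
  if [ -S "$1" ]; then echo present; else echo absent; fi
}

check_socket /var/run/socket_vmnet
```

If this prints `absent`, the socket_vmnet daemon is not running (or is listening on a different path), which would be consistent with the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` errors in every STDERR block above.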

TestNetworkPlugins/group/flannel/Start (9.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.887938208s)

-- stdout --
	* [flannel-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-264000 in cluster flannel-264000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-264000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:17:01.983754    5251 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:17:01.983870    5251 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:01.983872    5251 out.go:309] Setting ErrFile to fd 2...
	I0706 11:17:01.983875    5251 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:01.983949    5251 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:17:01.984966    5251 out.go:303] Setting JSON to false
	I0706 11:17:02.000291    5251 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2793,"bootTime":1688664628,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:17:02.000362    5251 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:17:02.004517    5251 out.go:177] * [flannel-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:17:02.008428    5251 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:17:02.012406    5251 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:17:02.008475    5251 notify.go:220] Checking for updates...
	I0706 11:17:02.015341    5251 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:17:02.018399    5251 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:17:02.021413    5251 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:17:02.022737    5251 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:17:02.025722    5251 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:17:02.025765    5251 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:17:02.030366    5251 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:17:02.035336    5251 start.go:297] selected driver: qemu2
	I0706 11:17:02.035341    5251 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:17:02.035347    5251 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:17:02.037237    5251 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:17:02.040414    5251 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:17:02.043524    5251 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:17:02.043558    5251 cni.go:84] Creating CNI manager for "flannel"
	I0706 11:17:02.043567    5251 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0706 11:17:02.043572    5251 start_flags.go:319] config:
	{Name:flannel-264000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0}
	I0706 11:17:02.047661    5251 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:02.054393    5251 out.go:177] * Starting control plane node flannel-264000 in cluster flannel-264000
	I0706 11:17:02.058336    5251 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:17:02.058361    5251 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:17:02.058379    5251 cache.go:57] Caching tarball of preloaded images
	I0706 11:17:02.058446    5251 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:17:02.058451    5251 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:17:02.058518    5251 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/flannel-264000/config.json ...
	I0706 11:17:02.058537    5251 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/flannel-264000/config.json: {Name:mkded564a418d629a72e57eb4d98c13f5aaf1f57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:17:02.058753    5251 start.go:365] acquiring machines lock for flannel-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:17:02.058785    5251 start.go:369] acquired machines lock for "flannel-264000" in 27.333µs
	I0706 11:17:02.058797    5251 start.go:93] Provisioning new machine with config: &{Name:flannel-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.3 ClusterName:flannel-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:17:02.058830    5251 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:17:02.067318    5251 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:17:02.082718    5251 start.go:159] libmachine.API.Create for "flannel-264000" (driver="qemu2")
	I0706 11:17:02.082737    5251 client.go:168] LocalClient.Create starting
	I0706 11:17:02.082790    5251 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:17:02.082815    5251 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:02.082826    5251 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:02.082853    5251 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:17:02.082866    5251 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:02.082876    5251 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:02.083160    5251 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:17:02.194869    5251 main.go:141] libmachine: Creating SSH key...
	I0706 11:17:02.316756    5251 main.go:141] libmachine: Creating Disk image...
	I0706 11:17:02.316763    5251 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:17:02.316910    5251 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/disk.qcow2
	I0706 11:17:02.325406    5251 main.go:141] libmachine: STDOUT: 
	I0706 11:17:02.325419    5251 main.go:141] libmachine: STDERR: 
	I0706 11:17:02.325465    5251 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/disk.qcow2 +20000M
	I0706 11:17:02.332614    5251 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:17:02.332637    5251 main.go:141] libmachine: STDERR: 
	I0706 11:17:02.332662    5251 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/disk.qcow2
	I0706 11:17:02.332668    5251 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:17:02.332719    5251 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:e8:90:13:c5:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/disk.qcow2
	I0706 11:17:02.334330    5251 main.go:141] libmachine: STDOUT: 
	I0706 11:17:02.334342    5251 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:17:02.334359    5251 client.go:171] LocalClient.Create took 251.620625ms
	I0706 11:17:04.336521    5251 start.go:128] duration metric: createHost completed in 2.277682292s
	I0706 11:17:04.336586    5251 start.go:83] releasing machines lock for "flannel-264000", held for 2.27779875s
	W0706 11:17:04.336657    5251 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:04.345147    5251 out.go:177] * Deleting "flannel-264000" in qemu2 ...
	W0706 11:17:04.371092    5251 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:04.371128    5251 start.go:687] Will try again in 5 seconds ...
	I0706 11:17:09.373395    5251 start.go:365] acquiring machines lock for flannel-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:17:09.373951    5251 start.go:369] acquired machines lock for "flannel-264000" in 442.833µs
	I0706 11:17:09.374057    5251 start.go:93] Provisioning new machine with config: &{Name:flannel-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.3 ClusterName:flannel-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:17:09.374377    5251 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:17:09.380039    5251 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:17:09.429335    5251 start.go:159] libmachine.API.Create for "flannel-264000" (driver="qemu2")
	I0706 11:17:09.429381    5251 client.go:168] LocalClient.Create starting
	I0706 11:17:09.429546    5251 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:17:09.429588    5251 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:09.429608    5251 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:09.429710    5251 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:17:09.429742    5251 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:09.429761    5251 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:09.430391    5251 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:17:09.683164    5251 main.go:141] libmachine: Creating SSH key...
	I0706 11:17:09.785294    5251 main.go:141] libmachine: Creating Disk image...
	I0706 11:17:09.785299    5251 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:17:09.785460    5251 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/disk.qcow2
	I0706 11:17:09.794447    5251 main.go:141] libmachine: STDOUT: 
	I0706 11:17:09.794459    5251 main.go:141] libmachine: STDERR: 
	I0706 11:17:09.794524    5251 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/disk.qcow2 +20000M
	I0706 11:17:09.801729    5251 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:17:09.801744    5251 main.go:141] libmachine: STDERR: 
	I0706 11:17:09.801759    5251 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/disk.qcow2
	I0706 11:17:09.801769    5251 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:17:09.801812    5251 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:f1:3a:f0:83:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/flannel-264000/disk.qcow2
	I0706 11:17:09.803361    5251 main.go:141] libmachine: STDOUT: 
	I0706 11:17:09.803374    5251 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:17:09.803386    5251 client.go:171] LocalClient.Create took 373.9995ms
	I0706 11:17:11.805560    5251 start.go:128] duration metric: createHost completed in 2.43115775s
	I0706 11:17:11.805644    5251 start.go:83] releasing machines lock for "flannel-264000", held for 2.431674875s
	W0706 11:17:11.806078    5251 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:11.815814    5251 out.go:177] 
	W0706 11:17:11.819830    5251 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:17:11.819867    5251 out.go:239] * 
	* 
	W0706 11:17:11.822604    5251 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:17:11.830688    5251 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.89s)
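Every failure in this run reduces to the same root cause: `socket_vmnet_client` cannot connect to the Unix socket at `/var/run/socket_vmnet`, which indicates no `socket_vmnet` daemon is accepting connections there (the daemon is not running, or a stale socket file was left behind). The sketch below is not part of the test suite; it is a minimal, self-contained Python reproduction of that failure mode — a socket file that exists on disk but has no listener yields the same `Connection refused` error the log shows.

```python
import os
import socket
import tempfile

# Stand-in for /var/run/socket_vmnet (hypothetical path, so the
# sketch runs anywhere without privileges).
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")

# Bind a Unix socket and close it WITHOUT ever calling listen().
# The socket file stays on disk, mimicking a dead or never-started
# socket_vmnet daemon whose socket path still exists.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.close()

# Now connect the way socket_vmnet_client would.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    cli.connect(path)
    result = "connected"
except ConnectionRefusedError:
    # Same ECONNREFUSED the log reports as:
    #   Failed to connect to "/var/run/socket_vmnet": Connection refused
    result = "refused"
finally:
    cli.close()

print(result)
```

On the CI host the usual fix is to (re)start the `socket_vmnet` daemon before the test run; minikube itself only retries the VM creation, which is why every retry above fails identically.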

TestNetworkPlugins/group/bridge/Start (9.84s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.840239417s)

-- stdout --
	* [bridge-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-264000 in cluster bridge-264000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-264000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:17:14.161592    5369 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:17:14.161710    5369 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:14.161713    5369 out.go:309] Setting ErrFile to fd 2...
	I0706 11:17:14.161715    5369 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:14.161785    5369 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:17:14.162819    5369 out.go:303] Setting JSON to false
	I0706 11:17:14.178031    5369 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2806,"bootTime":1688664628,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:17:14.178099    5369 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:17:14.182980    5369 out.go:177] * [bridge-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:17:14.190849    5369 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:17:14.190917    5369 notify.go:220] Checking for updates...
	I0706 11:17:14.194931    5369 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:17:14.197842    5369 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:17:14.200887    5369 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:17:14.203873    5369 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:17:14.206786    5369 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:17:14.210176    5369 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:17:14.210211    5369 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:17:14.214909    5369 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:17:14.221867    5369 start.go:297] selected driver: qemu2
	I0706 11:17:14.221873    5369 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:17:14.221878    5369 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:17:14.223756    5369 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:17:14.226887    5369 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:17:14.229798    5369 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:17:14.229815    5369 cni.go:84] Creating CNI manager for "bridge"
	I0706 11:17:14.229821    5369 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:17:14.229827    5369 start_flags.go:319] config:
	{Name:bridge-264000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0}
	I0706 11:17:14.233979    5369 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:14.240909    5369 out.go:177] * Starting control plane node bridge-264000 in cluster bridge-264000
	I0706 11:17:14.244878    5369 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:17:14.244908    5369 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:17:14.244920    5369 cache.go:57] Caching tarball of preloaded images
	I0706 11:17:14.244985    5369 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:17:14.244990    5369 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:17:14.245054    5369 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/bridge-264000/config.json ...
	I0706 11:17:14.245066    5369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/bridge-264000/config.json: {Name:mk81f0b10d41872b7a343a623e50e0e2aeed20ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:17:14.245273    5369 start.go:365] acquiring machines lock for bridge-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:17:14.245302    5369 start.go:369] acquired machines lock for "bridge-264000" in 23.75µs
	I0706 11:17:14.245313    5369 start.go:93] Provisioning new machine with config: &{Name:bridge-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.3 ClusterName:bridge-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:17:14.245340    5369 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:17:14.253935    5369 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:17:14.269629    5369 start.go:159] libmachine.API.Create for "bridge-264000" (driver="qemu2")
	I0706 11:17:14.269654    5369 client.go:168] LocalClient.Create starting
	I0706 11:17:14.269705    5369 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:17:14.269726    5369 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:14.269738    5369 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:14.269781    5369 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:17:14.269801    5369 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:14.269810    5369 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:14.270130    5369 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:17:14.387972    5369 main.go:141] libmachine: Creating SSH key...
	I0706 11:17:14.626947    5369 main.go:141] libmachine: Creating Disk image...
	I0706 11:17:14.626960    5369 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:17:14.627164    5369 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/disk.qcow2
	I0706 11:17:14.636825    5369 main.go:141] libmachine: STDOUT: 
	I0706 11:17:14.636842    5369 main.go:141] libmachine: STDERR: 
	I0706 11:17:14.636895    5369 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/disk.qcow2 +20000M
	I0706 11:17:14.644124    5369 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:17:14.644136    5369 main.go:141] libmachine: STDERR: 
	I0706 11:17:14.644147    5369 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/disk.qcow2
	I0706 11:17:14.644152    5369 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:17:14.644189    5369 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:c2:a4:83:07:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/disk.qcow2
	I0706 11:17:14.645667    5369 main.go:141] libmachine: STDOUT: 
	I0706 11:17:14.645681    5369 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:17:14.645699    5369 client.go:171] LocalClient.Create took 376.043666ms
	I0706 11:17:16.647850    5369 start.go:128] duration metric: createHost completed in 2.402501416s
	I0706 11:17:16.647912    5369 start.go:83] releasing machines lock for "bridge-264000", held for 2.402604834s
	W0706 11:17:16.647980    5369 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:16.658166    5369 out.go:177] * Deleting "bridge-264000" in qemu2 ...
	W0706 11:17:16.677447    5369 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:16.677472    5369 start.go:687] Will try again in 5 seconds ...
	I0706 11:17:21.679391    5369 start.go:365] acquiring machines lock for bridge-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:17:21.680001    5369 start.go:369] acquired machines lock for "bridge-264000" in 504.833µs
	I0706 11:17:21.680118    5369 start.go:93] Provisioning new machine with config: &{Name:bridge-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.3 ClusterName:bridge-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:17:21.680464    5369 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:17:21.690220    5369 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:17:21.739812    5369 start.go:159] libmachine.API.Create for "bridge-264000" (driver="qemu2")
	I0706 11:17:21.739862    5369 client.go:168] LocalClient.Create starting
	I0706 11:17:21.740000    5369 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:17:21.740041    5369 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:21.740061    5369 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:21.740148    5369 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:17:21.740177    5369 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:21.740188    5369 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:21.740792    5369 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:17:21.867168    5369 main.go:141] libmachine: Creating SSH key...
	I0706 11:17:21.913405    5369 main.go:141] libmachine: Creating Disk image...
	I0706 11:17:21.913410    5369 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:17:21.913559    5369 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/disk.qcow2
	I0706 11:17:21.922229    5369 main.go:141] libmachine: STDOUT: 
	I0706 11:17:21.922274    5369 main.go:141] libmachine: STDERR: 
	I0706 11:17:21.922344    5369 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/disk.qcow2 +20000M
	I0706 11:17:21.929467    5369 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:17:21.929478    5369 main.go:141] libmachine: STDERR: 
	I0706 11:17:21.929491    5369 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/disk.qcow2
	I0706 11:17:21.929496    5369 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:17:21.929537    5369 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:52:80:73:d7:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/bridge-264000/disk.qcow2
	I0706 11:17:21.931120    5369 main.go:141] libmachine: STDOUT: 
	I0706 11:17:21.931130    5369 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:17:21.931142    5369 client.go:171] LocalClient.Create took 191.274208ms
	I0706 11:17:23.933368    5369 start.go:128] duration metric: createHost completed in 2.252862959s
	I0706 11:17:23.933427    5369 start.go:83] releasing machines lock for "bridge-264000", held for 2.253408s
	W0706 11:17:23.933808    5369 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:23.943514    5369 out.go:177] 
	W0706 11:17:23.947562    5369 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:17:23.947587    5369 out.go:239] * 
	* 
	W0706 11:17:23.950324    5369 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:17:23.960284    5369 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.84s)
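Every qemu2 start failure in this report dies the same way: `socket_vmnet_client` gets "Connection refused" on the unix socket at `/var/run/socket_vmnet` before the VM ever boots. A minimal pre-flight sketch (not part of the test suite; the socket path is taken from the `SocketVMnetPath` value in the logs above):

```shell
#!/bin/sh
# Pre-flight check: the qemu2 driver needs the socket_vmnet daemon to be
# listening on its unix socket *before* `minikube start` runs. The path
# below is the SocketVMnetPath shown in the logs; adjust for your install.
check_socket() {
  # -S tests for a socket file; anything else means the daemon is not
  # running, and every qemu2 start will fail exactly as captured above.
  if [ -S "$1" ]; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

check_socket /var/run/socket_vmnet
```

On this CI host the check would presumably print the `missing:` line, which also explains why the suggested `minikube delete -p <profile>` cannot help: the daemon was never reachable, so the retry five seconds later hits the same refusal.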

TestNetworkPlugins/group/kubenet/Start (9.67s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-264000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.671469375s)

-- stdout --
	* [kubenet-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-264000 in cluster kubenet-264000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-264000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:17:26.110301    5479 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:17:26.110442    5479 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:26.110445    5479 out.go:309] Setting ErrFile to fd 2...
	I0706 11:17:26.110447    5479 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:26.110515    5479 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:17:26.111502    5479 out.go:303] Setting JSON to false
	I0706 11:17:26.126681    5479 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2818,"bootTime":1688664628,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:17:26.126764    5479 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:17:26.130843    5479 out.go:177] * [kubenet-264000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:17:26.137859    5479 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:17:26.141783    5479 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:17:26.137919    5479 notify.go:220] Checking for updates...
	I0706 11:17:26.144833    5479 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:17:26.147733    5479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:17:26.150741    5479 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:17:26.153820    5479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:17:26.157115    5479 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:17:26.157161    5479 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:17:26.161762    5479 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:17:26.168763    5479 start.go:297] selected driver: qemu2
	I0706 11:17:26.168771    5479 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:17:26.168786    5479 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:17:26.170729    5479 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:17:26.173764    5479 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:17:26.176842    5479 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:17:26.176859    5479 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0706 11:17:26.176863    5479 start_flags.go:319] config:
	{Name:kubenet-264000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:17:26.181153    5479 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:26.188756    5479 out.go:177] * Starting control plane node kubenet-264000 in cluster kubenet-264000
	I0706 11:17:26.192776    5479 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:17:26.192797    5479 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:17:26.192809    5479 cache.go:57] Caching tarball of preloaded images
	I0706 11:17:26.192866    5479 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:17:26.192871    5479 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:17:26.192924    5479 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/kubenet-264000/config.json ...
	I0706 11:17:26.192936    5479 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/kubenet-264000/config.json: {Name:mk059ff030ee1c3a5db1ff48f5352d7ea3aa9088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:17:26.193149    5479 start.go:365] acquiring machines lock for kubenet-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:17:26.193179    5479 start.go:369] acquired machines lock for "kubenet-264000" in 24.167µs
	I0706 11:17:26.193190    5479 start.go:93] Provisioning new machine with config: &{Name:kubenet-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:17:26.193216    5479 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:17:26.201680    5479 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:17:26.217527    5479 start.go:159] libmachine.API.Create for "kubenet-264000" (driver="qemu2")
	I0706 11:17:26.217554    5479 client.go:168] LocalClient.Create starting
	I0706 11:17:26.217612    5479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:17:26.217631    5479 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:26.217642    5479 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:26.217680    5479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:17:26.217694    5479 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:26.217700    5479 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:26.217995    5479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:17:26.337734    5479 main.go:141] libmachine: Creating SSH key...
	I0706 11:17:26.372358    5479 main.go:141] libmachine: Creating Disk image...
	I0706 11:17:26.372363    5479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:17:26.372516    5479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/disk.qcow2
	I0706 11:17:26.381209    5479 main.go:141] libmachine: STDOUT: 
	I0706 11:17:26.381225    5479 main.go:141] libmachine: STDERR: 
	I0706 11:17:26.381298    5479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/disk.qcow2 +20000M
	I0706 11:17:26.388382    5479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:17:26.388394    5479 main.go:141] libmachine: STDERR: 
	I0706 11:17:26.388414    5479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/disk.qcow2
	I0706 11:17:26.388419    5479 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:17:26.388458    5479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a0:cc:b6:82:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/disk.qcow2
	I0706 11:17:26.390003    5479 main.go:141] libmachine: STDOUT: 
	I0706 11:17:26.390017    5479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:17:26.390034    5479 client.go:171] LocalClient.Create took 172.474416ms
	I0706 11:17:28.392211    5479 start.go:128] duration metric: createHost completed in 2.198985583s
	I0706 11:17:28.392272    5479 start.go:83] releasing machines lock for "kubenet-264000", held for 2.199092083s
	W0706 11:17:28.392334    5479 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:28.400628    5479 out.go:177] * Deleting "kubenet-264000" in qemu2 ...
	W0706 11:17:28.424706    5479 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:28.424731    5479 start.go:687] Will try again in 5 seconds ...
	I0706 11:17:33.426485    5479 start.go:365] acquiring machines lock for kubenet-264000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:17:33.426813    5479 start.go:369] acquired machines lock for "kubenet-264000" in 206.041µs
	I0706 11:17:33.426940    5479 start.go:93] Provisioning new machine with config: &{Name:kubenet-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:17:33.427199    5479 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:17:33.432830    5479 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0706 11:17:33.473071    5479 start.go:159] libmachine.API.Create for "kubenet-264000" (driver="qemu2")
	I0706 11:17:33.473113    5479 client.go:168] LocalClient.Create starting
	I0706 11:17:33.473268    5479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:17:33.473335    5479 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:33.473361    5479 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:33.473456    5479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:17:33.473492    5479 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:33.473509    5479 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:33.474045    5479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:17:33.605435    5479 main.go:141] libmachine: Creating SSH key...
	I0706 11:17:33.694931    5479 main.go:141] libmachine: Creating Disk image...
	I0706 11:17:33.694940    5479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:17:33.695103    5479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/disk.qcow2
	I0706 11:17:33.703943    5479 main.go:141] libmachine: STDOUT: 
	I0706 11:17:33.703956    5479 main.go:141] libmachine: STDERR: 
	I0706 11:17:33.704014    5479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/disk.qcow2 +20000M
	I0706 11:17:33.711183    5479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:17:33.711194    5479 main.go:141] libmachine: STDERR: 
	I0706 11:17:33.711210    5479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/disk.qcow2
	I0706 11:17:33.711215    5479 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:17:33.711256    5479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:f1:c1:a2:69:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/kubenet-264000/disk.qcow2
	I0706 11:17:33.712819    5479 main.go:141] libmachine: STDOUT: 
	I0706 11:17:33.712847    5479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:17:33.712859    5479 client.go:171] LocalClient.Create took 239.741541ms
	I0706 11:17:35.715046    5479 start.go:128] duration metric: createHost completed in 2.287784208s
	I0706 11:17:35.715101    5479 start.go:83] releasing machines lock for "kubenet-264000", held for 2.288230084s
	W0706 11:17:35.715507    5479 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:35.725189    5479 out.go:177] 
	W0706 11:17:35.729186    5479 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:17:35.729212    5479 out.go:239] * 
	* 
	W0706 11:17:35.731707    5479 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:17:35.741173    5479 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.67s)

TestStartStop/group/old-k8s-version/serial/FirstStart (9.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-789000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-789000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.701225833s)

-- stdout --
	* [old-k8s-version-789000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-789000 in cluster old-k8s-version-789000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-789000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:17:37.855192    5593 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:17:37.855315    5593 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:37.855317    5593 out.go:309] Setting ErrFile to fd 2...
	I0706 11:17:37.855320    5593 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:37.855385    5593 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:17:37.856420    5593 out.go:303] Setting JSON to false
	I0706 11:17:37.871534    5593 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2829,"bootTime":1688664628,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:17:37.871586    5593 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:17:37.876047    5593 out.go:177] * [old-k8s-version-789000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:17:37.883964    5593 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:17:37.884017    5593 notify.go:220] Checking for updates...
	I0706 11:17:37.887939    5593 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:17:37.890939    5593 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:17:37.893944    5593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:17:37.896933    5593 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:17:37.899937    5593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:17:37.903262    5593 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:17:37.903334    5593 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:17:37.906948    5593 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:17:37.912885    5593 start.go:297] selected driver: qemu2
	I0706 11:17:37.912889    5593 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:17:37.912895    5593 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:17:37.914829    5593 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:17:37.917959    5593 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:17:37.921047    5593 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:17:37.921071    5593 cni.go:84] Creating CNI manager for ""
	I0706 11:17:37.921080    5593 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0706 11:17:37.921084    5593 start_flags.go:319] config:
	{Name:old-k8s-version-789000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-789000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSoc
k: SSHAgentPID:0}
	I0706 11:17:37.925087    5593 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:37.931911    5593 out.go:177] * Starting control plane node old-k8s-version-789000 in cluster old-k8s-version-789000
	I0706 11:17:37.935989    5593 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0706 11:17:37.936013    5593 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0706 11:17:37.936024    5593 cache.go:57] Caching tarball of preloaded images
	I0706 11:17:37.936088    5593 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:17:37.936093    5593 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0706 11:17:37.936158    5593 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/old-k8s-version-789000/config.json ...
	I0706 11:17:37.936171    5593 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/old-k8s-version-789000/config.json: {Name:mk83568093d372814fb07d57ea474c0b265320ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:17:37.936385    5593 start.go:365] acquiring machines lock for old-k8s-version-789000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:17:37.936416    5593 start.go:369] acquired machines lock for "old-k8s-version-789000" in 23.959µs
	I0706 11:17:37.936429    5593 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-789000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:17:37.936463    5593 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:17:37.944961    5593 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:17:37.960736    5593 start.go:159] libmachine.API.Create for "old-k8s-version-789000" (driver="qemu2")
	I0706 11:17:37.960755    5593 client.go:168] LocalClient.Create starting
	I0706 11:17:37.960848    5593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:17:37.960891    5593 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:37.960910    5593 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:37.960960    5593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:17:37.960981    5593 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:37.960991    5593 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:37.961351    5593 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:17:38.079034    5593 main.go:141] libmachine: Creating SSH key...
	I0706 11:17:38.183991    5593 main.go:141] libmachine: Creating Disk image...
	I0706 11:17:38.183997    5593 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:17:38.184144    5593 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I0706 11:17:38.192639    5593 main.go:141] libmachine: STDOUT: 
	I0706 11:17:38.192654    5593 main.go:141] libmachine: STDERR: 
	I0706 11:17:38.192715    5593 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2 +20000M
	I0706 11:17:38.199843    5593 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:17:38.199856    5593 main.go:141] libmachine: STDERR: 
	I0706 11:17:38.199869    5593 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I0706 11:17:38.199878    5593 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:17:38.199913    5593 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:8e:c4:6e:af:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I0706 11:17:38.201430    5593 main.go:141] libmachine: STDOUT: 
	I0706 11:17:38.201444    5593 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:17:38.201460    5593 client.go:171] LocalClient.Create took 240.701375ms
	I0706 11:17:40.203645    5593 start.go:128] duration metric: createHost completed in 2.267170334s
	I0706 11:17:40.203715    5593 start.go:83] releasing machines lock for "old-k8s-version-789000", held for 2.267297375s
	W0706 11:17:40.203779    5593 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:40.210559    5593 out.go:177] * Deleting "old-k8s-version-789000" in qemu2 ...
	W0706 11:17:40.228602    5593 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:40.228629    5593 start.go:687] Will try again in 5 seconds ...
	I0706 11:17:45.230840    5593 start.go:365] acquiring machines lock for old-k8s-version-789000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:17:45.231370    5593 start.go:369] acquired machines lock for "old-k8s-version-789000" in 420.125µs
	I0706 11:17:45.231538    5593 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-789000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:17:45.231846    5593 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:17:45.242502    5593 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:17:45.290086    5593 start.go:159] libmachine.API.Create for "old-k8s-version-789000" (driver="qemu2")
	I0706 11:17:45.290139    5593 client.go:168] LocalClient.Create starting
	I0706 11:17:45.290280    5593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:17:45.290329    5593 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:45.290350    5593 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:45.290429    5593 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:17:45.290456    5593 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:45.290471    5593 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:45.290990    5593 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:17:45.420918    5593 main.go:141] libmachine: Creating SSH key...
	I0706 11:17:45.471037    5593 main.go:141] libmachine: Creating Disk image...
	I0706 11:17:45.471042    5593 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:17:45.471189    5593 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I0706 11:17:45.479725    5593 main.go:141] libmachine: STDOUT: 
	I0706 11:17:45.479739    5593 main.go:141] libmachine: STDERR: 
	I0706 11:17:45.479783    5593 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2 +20000M
	I0706 11:17:45.486841    5593 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:17:45.486855    5593 main.go:141] libmachine: STDERR: 
	I0706 11:17:45.486875    5593 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I0706 11:17:45.486883    5593 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:17:45.486915    5593 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:91:3e:08:51:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I0706 11:17:45.488413    5593 main.go:141] libmachine: STDOUT: 
	I0706 11:17:45.488425    5593 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:17:45.488436    5593 client.go:171] LocalClient.Create took 198.293458ms
	I0706 11:17:47.490586    5593 start.go:128] duration metric: createHost completed in 2.25872425s
	I0706 11:17:47.490650    5593 start.go:83] releasing machines lock for "old-k8s-version-789000", held for 2.259248708s
	W0706 11:17:47.491122    5593 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-789000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-789000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:47.500700    5593 out.go:177] 
	W0706 11:17:47.504623    5593 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:17:47.504646    5593 out.go:239] * 
	* 
	W0706 11:17:47.507434    5593 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:17:47.515686    5593 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-789000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (64.866916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.78s)
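Editor's note: every failure in this group traces back to the same `Failed to connect to "/var/run/socket_vmnet": Connection refused` during VM create/restart, meaning the socket_vmnet daemon was not serving its unix socket on the CI host; the later `no openapi getter` and `context does not exist` errors are downstream of the cluster never starting. A minimal diagnostic sketch (hypothetical, not part of the report; socket path taken from the log above):

```shell
#!/bin/sh
# Check whether the socket_vmnet daemon's unix socket exists. The qemu2
# driver hands the VM's network fd over this socket, so when it is absent
# every "Creating qemu2 VM" attempt fails fast with "Connection refused".
check_vmnet_socket() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    echo "ok: $sock is a live unix socket"
  else
    # Restarting the daemon (e.g. via its launchd/brew service) is the
    # usual fix; the exact service name depends on how it was installed.
    echo "missing: $sock"
  fi
}

check_vmnet_socket "$@"
```

If the socket is missing, restarting the socket_vmnet service on the host and rerunning the group would distinguish an environment problem from a minikube regression.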

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-789000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-789000 create -f testdata/busybox.yaml: exit status 1 (28.592ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-789000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (29.033292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (28.153959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-789000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-789000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-789000 describe deploy/metrics-server -n kube-system: exit status 1 (25.458833ms)

** stderr ** 
	error: context "old-k8s-version-789000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-789000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (28.954084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-789000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-789000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.175543708s)

-- stdout --
	* [old-k8s-version-789000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-789000 in cluster old-k8s-version-789000
	* Restarting existing qemu2 VM for "old-k8s-version-789000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-789000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:17:47.984572    5625 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:17:47.984671    5625 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:47.984674    5625 out.go:309] Setting ErrFile to fd 2...
	I0706 11:17:47.984677    5625 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:47.984750    5625 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:17:47.985704    5625 out.go:303] Setting JSON to false
	I0706 11:17:48.000870    5625 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2839,"bootTime":1688664628,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:17:48.000941    5625 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:17:48.004568    5625 out.go:177] * [old-k8s-version-789000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:17:48.011588    5625 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:17:48.011650    5625 notify.go:220] Checking for updates...
	I0706 11:17:48.015532    5625 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:17:48.018578    5625 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:17:48.021548    5625 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:17:48.024522    5625 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:17:48.027535    5625 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:17:48.031352    5625 config.go:182] Loaded profile config "old-k8s-version-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0706 11:17:48.032979    5625 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0706 11:17:48.035503    5625 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:17:48.039522    5625 out.go:177] * Using the qemu2 driver based on existing profile
	I0706 11:17:48.044494    5625 start.go:297] selected driver: qemu2
	I0706 11:17:48.044499    5625 start.go:944] validating driver "qemu2" against &{Name:old-k8s-version-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-789000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:17:48.044548    5625 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:17:48.046549    5625 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:17:48.046570    5625 cni.go:84] Creating CNI manager for ""
	I0706 11:17:48.046578    5625 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0706 11:17:48.046581    5625 start_flags.go:319] config:
	{Name:old-k8s-version-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-789000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Use
rs:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:17:48.050468    5625 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:48.057518    5625 out.go:177] * Starting control plane node old-k8s-version-789000 in cluster old-k8s-version-789000
	I0706 11:17:48.061569    5625 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0706 11:17:48.061595    5625 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0706 11:17:48.061610    5625 cache.go:57] Caching tarball of preloaded images
	I0706 11:17:48.061665    5625 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:17:48.061670    5625 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0706 11:17:48.061736    5625 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/old-k8s-version-789000/config.json ...
	I0706 11:17:48.062096    5625 start.go:365] acquiring machines lock for old-k8s-version-789000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:17:48.062120    5625 start.go:369] acquired machines lock for "old-k8s-version-789000" in 18.417µs
	I0706 11:17:48.062129    5625 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:17:48.062135    5625 fix.go:54] fixHost starting: 
	I0706 11:17:48.062247    5625 fix.go:102] recreateIfNeeded on old-k8s-version-789000: state=Stopped err=<nil>
	W0706 11:17:48.062257    5625 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:17:48.066501    5625 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-789000" ...
	I0706 11:17:48.074561    5625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:91:3e:08:51:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I0706 11:17:48.076344    5625 main.go:141] libmachine: STDOUT: 
	I0706 11:17:48.076363    5625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:17:48.076400    5625 fix.go:56] fixHost completed within 14.266666ms
	I0706 11:17:48.076404    5625 start.go:83] releasing machines lock for "old-k8s-version-789000", held for 14.2805ms
	W0706 11:17:48.076412    5625 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:17:48.076459    5625 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:48.076463    5625 start.go:687] Will try again in 5 seconds ...
	I0706 11:17:53.078692    5625 start.go:365] acquiring machines lock for old-k8s-version-789000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:17:53.079215    5625 start.go:369] acquired machines lock for "old-k8s-version-789000" in 415µs
	I0706 11:17:53.079375    5625 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:17:53.079394    5625 fix.go:54] fixHost starting: 
	I0706 11:17:53.080188    5625 fix.go:102] recreateIfNeeded on old-k8s-version-789000: state=Stopped err=<nil>
	W0706 11:17:53.080215    5625 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:17:53.084729    5625 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-789000" ...
	I0706 11:17:53.091906    5625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:91:3e:08:51:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I0706 11:17:53.101178    5625 main.go:141] libmachine: STDOUT: 
	I0706 11:17:53.101224    5625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:17:53.101301    5625 fix.go:56] fixHost completed within 21.908208ms
	I0706 11:17:53.101314    5625 start.go:83] releasing machines lock for "old-k8s-version-789000", held for 22.07825ms
	W0706 11:17:53.101514    5625 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:53.108702    5625 out.go:177] 
	W0706 11:17:53.112761    5625 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:17:53.112803    5625 out.go:239] * 
	* 
	W0706 11:17:53.115013    5625 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:17:53.120656    5625 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-789000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (67.591542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-789000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (33.275166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-789000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-789000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-789000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.699167ms)

** stderr ** 
	error: context "old-k8s-version-789000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-789000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (29.054875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-789000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-789000 "sudo crictl images -o json": exit status 89 (38.80975ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-789000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-789000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-789000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (28.832458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-789000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-789000 --alsologtostderr -v=1: exit status 89 (40.259459ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-789000"

-- /stdout --
** stderr ** 
	I0706 11:17:53.387000    5644 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:17:53.387396    5644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:53.387399    5644 out.go:309] Setting ErrFile to fd 2...
	I0706 11:17:53.387402    5644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:53.387498    5644 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:17:53.387705    5644 out.go:303] Setting JSON to false
	I0706 11:17:53.387713    5644 mustload.go:65] Loading cluster: old-k8s-version-789000
	I0706 11:17:53.387891    5644 config.go:182] Loaded profile config "old-k8s-version-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0706 11:17:53.391228    5644 out.go:177] * The control plane node must be running for this command
	I0706 11:17:53.395337    5644 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-789000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-789000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (29.0115ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (28.80925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.78s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-658000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-658000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (9.705763s)

-- stdout --
	* [no-preload-658000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-658000 in cluster no-preload-658000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-658000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:17:53.848331    5667 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:17:53.848454    5667 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:53.848457    5667 out.go:309] Setting ErrFile to fd 2...
	I0706 11:17:53.848460    5667 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:17:53.848526    5667 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:17:53.849538    5667 out.go:303] Setting JSON to false
	I0706 11:17:53.865019    5667 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2845,"bootTime":1688664628,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:17:53.865090    5667 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:17:53.873864    5667 out.go:177] * [no-preload-658000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:17:53.877927    5667 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:17:53.878001    5667 notify.go:220] Checking for updates...
	I0706 11:17:53.884828    5667 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:17:53.887893    5667 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:17:53.890775    5667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:17:53.893842    5667 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:17:53.896855    5667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:17:53.900181    5667 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:17:53.900246    5667 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:17:53.904831    5667 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:17:53.911867    5667 start.go:297] selected driver: qemu2
	I0706 11:17:53.911873    5667 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:17:53.911880    5667 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:17:53.913843    5667 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:17:53.916840    5667 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:17:53.919963    5667 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:17:53.919986    5667 cni.go:84] Creating CNI manager for ""
	I0706 11:17:53.919994    5667 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:17:53.920005    5667 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:17:53.920011    5667 start_flags.go:319] config:
	{Name:no-preload-658000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-658000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSH
AgentPID:0}
	I0706 11:17:53.924144    5667 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:53.930922    5667 out.go:177] * Starting control plane node no-preload-658000 in cluster no-preload-658000
	I0706 11:17:53.934874    5667 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:17:53.934975    5667 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/no-preload-658000/config.json ...
	I0706 11:17:53.934992    5667 cache.go:107] acquiring lock: {Name:mkd11ccb18e4f1534fd17ab02fa53f43012548a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:53.934999    5667 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/no-preload-658000/config.json: {Name:mke19954129e3d6f1abce7cebfa7fb66c1b67c33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:17:53.934990    5667 cache.go:107] acquiring lock: {Name:mke32f2365c6a76b179b139bffb8dbe1b535eb28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:53.935015    5667 cache.go:107] acquiring lock: {Name:mk5f37ff2afc79e57d5a59f50c259504dc4a9f7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:53.935028    5667 cache.go:107] acquiring lock: {Name:mk235a9aa637758531dee8ff57aac3236f70a6e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:53.935033    5667 cache.go:107] acquiring lock: {Name:mk3bd48bbfea77c421cf4385c27668ba48a481bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:53.935028    5667 cache.go:107] acquiring lock: {Name:mkd09f1ce5a24fcf5928b6c4bc61e32b58362160 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:53.935040    5667 cache.go:107] acquiring lock: {Name:mkb79a590c0e570702fa68b8fd9be47197913395 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:53.935185    5667 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0706 11:17:53.935196    5667 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0706 11:17:53.935225    5667 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0706 11:17:53.935296    5667 cache.go:107] acquiring lock: {Name:mk53b878b9fad110557dacf3a6c102ce905fc596 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:17:53.935308    5667 start.go:365] acquiring machines lock for no-preload-658000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:17:53.935325    5667 cache.go:115] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0706 11:17:53.935365    5667 start.go:369] acquired machines lock for "no-preload-658000" in 44.833µs
	I0706 11:17:53.935355    5667 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 366µs
	I0706 11:17:53.935381    5667 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0706 11:17:53.935403    5667 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0706 11:17:53.935380    5667 start.go:93] Provisioning new machine with config: &{Name:no-preload-658000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-658000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:17:53.935425    5667 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0706 11:17:53.935403    5667 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0706 11:17:53.935426    5667 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:17:53.935685    5667 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0706 11:17:53.939891    5667 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:17:53.948879    5667 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0706 11:17:53.948917    5667 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0706 11:17:53.949513    5667 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0706 11:17:53.949583    5667 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0706 11:17:53.949634    5667 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0706 11:17:53.949654    5667 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0706 11:17:53.949685    5667 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0706 11:17:53.956021    5667 start.go:159] libmachine.API.Create for "no-preload-658000" (driver="qemu2")
	I0706 11:17:53.956044    5667 client.go:168] LocalClient.Create starting
	I0706 11:17:53.956125    5667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:17:53.956153    5667 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:53.956161    5667 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:53.956208    5667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:17:53.956223    5667 main.go:141] libmachine: Decoding PEM data...
	I0706 11:17:53.956238    5667 main.go:141] libmachine: Parsing certificate...
	I0706 11:17:53.956569    5667 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:17:54.089796    5667 main.go:141] libmachine: Creating SSH key...
	I0706 11:17:54.135473    5667 main.go:141] libmachine: Creating Disk image...
	I0706 11:17:54.135482    5667 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:17:54.135670    5667 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2
	I0706 11:17:54.144113    5667 main.go:141] libmachine: STDOUT: 
	I0706 11:17:54.144129    5667 main.go:141] libmachine: STDERR: 
	I0706 11:17:54.144192    5667 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2 +20000M
	I0706 11:17:54.151654    5667 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:17:54.151670    5667 main.go:141] libmachine: STDERR: 
	I0706 11:17:54.151688    5667 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2
	I0706 11:17:54.151694    5667 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:17:54.151743    5667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:8a:48:a7:4c:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2
	I0706 11:17:54.153638    5667 main.go:141] libmachine: STDOUT: 
	I0706 11:17:54.153653    5667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:17:54.153672    5667 client.go:171] LocalClient.Create took 197.625209ms
	I0706 11:17:55.154434    5667 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0706 11:17:55.155012    5667 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3
	I0706 11:17:55.202217    5667 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3
	I0706 11:17:55.316890    5667 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0706 11:17:55.316905    5667 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.381640916s
	I0706 11:17:55.316912    5667 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0706 11:17:55.342293    5667 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0706 11:17:55.427019    5667 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0706 11:17:55.599444    5667 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3
	I0706 11:17:55.807524    5667 cache.go:162] opening:  /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0706 11:17:56.153991    5667 start.go:128] duration metric: createHost completed in 2.218515625s
	I0706 11:17:56.154049    5667 start.go:83] releasing machines lock for "no-preload-658000", held for 2.218682s
	W0706 11:17:56.154103    5667 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:56.164574    5667 out.go:177] * Deleting "no-preload-658000" in qemu2 ...
	W0706 11:17:56.185609    5667 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:17:56.185637    5667 start.go:687] Will try again in 5 seconds ...
	I0706 11:17:57.009291    5667 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0706 11:17:57.009332    5667 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.074321708s
	I0706 11:17:57.009360    5667 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0706 11:17:57.697832    5667 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0706 11:17:57.697882    5667 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3" took 3.7629105s
	I0706 11:17:57.697910    5667 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0706 11:17:57.885644    5667 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0706 11:17:57.885703    5667 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3" took 3.950670292s
	I0706 11:17:57.885754    5667 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0706 11:17:59.318117    5667 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0706 11:17:59.318177    5667 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3" took 5.383152875s
	I0706 11:17:59.318205    5667 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0706 11:17:59.612991    5667 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0706 11:17:59.613028    5667 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3" took 5.678015s
	I0706 11:17:59.613044    5667 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0706 11:18:01.185832    5667 start.go:365] acquiring machines lock for no-preload-658000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:01.186357    5667 start.go:369] acquired machines lock for "no-preload-658000" in 428.458µs
	I0706 11:18:01.186514    5667 start.go:93] Provisioning new machine with config: &{Name:no-preload-658000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-658000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:18:01.186831    5667 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:18:01.196401    5667 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:18:01.244813    5667 start.go:159] libmachine.API.Create for "no-preload-658000" (driver="qemu2")
	I0706 11:18:01.244856    5667 client.go:168] LocalClient.Create starting
	I0706 11:18:01.245003    5667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:18:01.245050    5667 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:01.245067    5667 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:01.245140    5667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:18:01.245167    5667 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:01.245185    5667 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:01.245667    5667 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:18:01.372218    5667 main.go:141] libmachine: Creating SSH key...
	I0706 11:18:01.469175    5667 main.go:141] libmachine: Creating Disk image...
	I0706 11:18:01.469181    5667 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:18:01.469333    5667 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2
	I0706 11:18:01.478026    5667 main.go:141] libmachine: STDOUT: 
	I0706 11:18:01.478040    5667 main.go:141] libmachine: STDERR: 
	I0706 11:18:01.478101    5667 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2 +20000M
	I0706 11:18:01.485425    5667 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:18:01.485440    5667 main.go:141] libmachine: STDERR: 
	I0706 11:18:01.485451    5667 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2
	I0706 11:18:01.485463    5667 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:18:01.485507    5667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:62:ab:87:49:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2
	I0706 11:18:01.487182    5667 main.go:141] libmachine: STDOUT: 
	I0706 11:18:01.487196    5667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:01.487207    5667 client.go:171] LocalClient.Create took 242.347833ms
	I0706 11:18:03.024856    5667 cache.go:157] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0706 11:18:03.024934    5667 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 9.089952125s
	I0706 11:18:03.024979    5667 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0706 11:18:03.025037    5667 cache.go:87] Successfully saved all images to host disk.
	I0706 11:18:03.489392    5667 start.go:128] duration metric: createHost completed in 2.302533333s
	I0706 11:18:03.489485    5667 start.go:83] releasing machines lock for "no-preload-658000", held for 2.303107s
	W0706 11:18:03.489796    5667 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-658000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-658000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:03.500225    5667 out.go:177] 
	W0706 11:18:03.504127    5667 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:18:03.504149    5667 out.go:239] * 
	* 
	W0706 11:18:03.506546    5667 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:18:03.515193    5667 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-658000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000: exit status 7 (70.565125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.78s)
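Every failed start in this group traces back to the same error, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning no socket_vmnet daemon was listening at the path minikube was configured with (`SocketVMnetPath:/var/run/socket_vmnet` in the config above). A minimal pre-flight check on the test host might look like the following sketch; the socket path is taken from the log, while the daemon process name `socket_vmnet` is an assumption:

```shell
# Diagnostic sketch for the recurring failure in this run:
#   Failed to connect to "/var/run/socket_vmnet": Connection refused
# The socket path is taken from the log above; the daemon process
# name checked below is an assumption.
SOCKET=/var/run/socket_vmnet

if [ -S "$SOCKET" ]; then
  echo "socket_vmnet socket present at $SOCKET"
else
  echo "socket_vmnet socket missing at $SOCKET; daemon likely not running"
fi

if pgrep -x socket_vmnet >/dev/null 2>&1; then
  echo "socket_vmnet process is running"
else
  echo "no socket_vmnet process found"
fi
```

If the socket is missing, restarting the daemon before the run (for a Homebrew install, something like `sudo "$(brew --prefix)/bin/brew" services start socket_vmnet`, as suggested in the minikube QEMU driver docs) would likely clear this whole class of failures.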

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-658000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-658000 create -f testdata/busybox.yaml: exit status 1 (29.031167ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-658000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000: exit status 7 (28.320375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-658000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000: exit status 7 (28.637542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-658000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-658000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-658000 describe deploy/metrics-server -n kube-system: exit status 1 (25.723792ms)

** stderr ** 
	error: context "no-preload-658000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-658000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000: exit status 7 (28.898042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-658000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-658000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (5.177142583s)

-- stdout --
	* [no-preload-658000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-658000 in cluster no-preload-658000
	* Restarting existing qemu2 VM for "no-preload-658000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-658000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:18:03.975776    5800 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:18:03.975888    5800 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:03.975891    5800 out.go:309] Setting ErrFile to fd 2...
	I0706 11:18:03.975894    5800 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:03.975959    5800 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:18:03.976912    5800 out.go:303] Setting JSON to false
	I0706 11:18:03.992313    5800 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2855,"bootTime":1688664628,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:18:03.992396    5800 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:18:03.996951    5800 out.go:177] * [no-preload-658000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:18:04.003986    5800 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:18:04.004056    5800 notify.go:220] Checking for updates...
	I0706 11:18:04.009972    5800 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:18:04.012972    5800 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:18:04.015897    5800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:18:04.018946    5800 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:18:04.021982    5800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:18:04.025235    5800 config.go:182] Loaded profile config "no-preload-658000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:04.025479    5800 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:18:04.029925    5800 out.go:177] * Using the qemu2 driver based on existing profile
	I0706 11:18:04.036921    5800 start.go:297] selected driver: qemu2
	I0706 11:18:04.036929    5800 start.go:944] validating driver "qemu2" against &{Name:no-preload-658000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-658000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:18:04.037002    5800 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:18:04.038957    5800 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:18:04.038980    5800 cni.go:84] Creating CNI manager for ""
	I0706 11:18:04.038986    5800 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:18:04.038992    5800 start_flags.go:319] config:
	{Name:no-preload-658000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-658000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:18:04.042759    5800 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:04.049934    5800 out.go:177] * Starting control plane node no-preload-658000 in cluster no-preload-658000
	I0706 11:18:04.053931    5800 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:18:04.054005    5800 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/no-preload-658000/config.json ...
	I0706 11:18:04.054025    5800 cache.go:107] acquiring lock: {Name:mke32f2365c6a76b179b139bffb8dbe1b535eb28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:04.054030    5800 cache.go:107] acquiring lock: {Name:mkd11ccb18e4f1534fd17ab02fa53f43012548a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:04.054045    5800 cache.go:107] acquiring lock: {Name:mk3bd48bbfea77c421cf4385c27668ba48a481bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:04.054088    5800 cache.go:115] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0706 11:18:04.054095    5800 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.833µs
	I0706 11:18:04.054100    5800 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0706 11:18:04.054101    5800 cache.go:115] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0706 11:18:04.054109    5800 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3" took 83.792µs
	I0706 11:18:04.054126    5800 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0706 11:18:04.054112    5800 cache.go:115] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0706 11:18:04.054131    5800 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3" took 118.542µs
	I0706 11:18:04.054134    5800 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0706 11:18:04.054144    5800 cache.go:107] acquiring lock: {Name:mk235a9aa637758531dee8ff57aac3236f70a6e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:04.054123    5800 cache.go:107] acquiring lock: {Name:mk5f37ff2afc79e57d5a59f50c259504dc4a9f7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:04.054159    5800 cache.go:107] acquiring lock: {Name:mk53b878b9fad110557dacf3a6c102ce905fc596 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:04.054180    5800 cache.go:115] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0706 11:18:04.054184    5800 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3" took 42µs
	I0706 11:18:04.054187    5800 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0706 11:18:04.054191    5800 cache.go:115] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0706 11:18:04.054190    5800 cache.go:107] acquiring lock: {Name:mkb79a590c0e570702fa68b8fd9be47197913395 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:04.054195    5800 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 77.959µs
	I0706 11:18:04.054200    5800 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0706 11:18:04.054200    5800 cache.go:115] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0706 11:18:04.054204    5800 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 96.792µs
	I0706 11:18:04.054208    5800 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0706 11:18:04.054225    5800 cache.go:115] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0706 11:18:04.054228    5800 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3" took 38.625µs
	I0706 11:18:04.054232    5800 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0706 11:18:04.054247    5800 cache.go:107] acquiring lock: {Name:mkd09f1ce5a24fcf5928b6c4bc61e32b58362160 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:04.054290    5800 cache.go:115] /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0706 11:18:04.054294    5800 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 88.708µs
	I0706 11:18:04.054301    5800 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0706 11:18:04.054305    5800 cache.go:87] Successfully saved all images to host disk.
	I0706 11:18:04.054365    5800 start.go:365] acquiring machines lock for no-preload-658000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:04.054394    5800 start.go:369] acquired machines lock for "no-preload-658000" in 22.833µs
	I0706 11:18:04.054403    5800 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:18:04.054408    5800 fix.go:54] fixHost starting: 
	I0706 11:18:04.054524    5800 fix.go:102] recreateIfNeeded on no-preload-658000: state=Stopped err=<nil>
	W0706 11:18:04.054533    5800 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:18:04.062972    5800 out.go:177] * Restarting existing qemu2 VM for "no-preload-658000" ...
	I0706 11:18:04.066946    5800 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:62:ab:87:49:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2
	I0706 11:18:04.068793    5800 main.go:141] libmachine: STDOUT: 
	I0706 11:18:04.068808    5800 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:04.068842    5800 fix.go:56] fixHost completed within 14.43175ms
	I0706 11:18:04.068847    5800 start.go:83] releasing machines lock for "no-preload-658000", held for 14.449333ms
	W0706 11:18:04.068854    5800 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:18:04.068899    5800 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:04.068903    5800 start.go:687] Will try again in 5 seconds ...
	I0706 11:18:09.070986    5800 start.go:365] acquiring machines lock for no-preload-658000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:09.071344    5800 start.go:369] acquired machines lock for "no-preload-658000" in 293.667µs
	I0706 11:18:09.071483    5800 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:18:09.071500    5800 fix.go:54] fixHost starting: 
	I0706 11:18:09.072185    5800 fix.go:102] recreateIfNeeded on no-preload-658000: state=Stopped err=<nil>
	W0706 11:18:09.072208    5800 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:18:09.075612    5800 out.go:177] * Restarting existing qemu2 VM for "no-preload-658000" ...
	I0706 11:18:09.082645    5800 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:62:ab:87:49:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/no-preload-658000/disk.qcow2
	I0706 11:18:09.090907    5800 main.go:141] libmachine: STDOUT: 
	I0706 11:18:09.090973    5800 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:09.091032    5800 fix.go:56] fixHost completed within 19.532708ms
	I0706 11:18:09.091050    5800 start.go:83] releasing machines lock for "no-preload-658000", held for 19.687209ms
	W0706 11:18:09.091192    5800 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-658000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-658000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:09.098477    5800 out.go:177] 
	W0706 11:18:09.101525    5800 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:18:09.101554    5800 out.go:239] * 
	* 
	W0706 11:18:09.104132    5800 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:18:09.113497    5800 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-658000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000: exit status 7 (66.466167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
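Editor's note: every restart attempt in this group fails with the same root cause, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the qemu2 driver cannot reach the socket_vmnet daemon; the later test failures are downstream of the cluster never starting. A minimal triage sketch (the socket path is the one reported in the log above; how socket_vmnet is installed and supervised varies per machine, so the remediation message is only a hint, not this suite's fix):

```shell
# Check whether the socket_vmnet daemon that the qemu2 driver depends on
# is actually available before re-running the suite.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ] && pgrep -x socket_vmnet >/dev/null 2>&1; then
  echo "socket_vmnet looks healthy"
else
  echo "socket_vmnet unavailable: restart its service (e.g. launchd/Homebrew) and retry"
fi
```

If the daemon is down, restarting it and re-running the `SecondStart` step would be the first thing to try before `minikube delete -p no-preload-658000`.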

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-658000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000: exit status 7 (31.972667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-658000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-658000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-658000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.271667ms)

** stderr ** 
	error: context "no-preload-658000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-658000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000: exit status 7 (28.739292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-658000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-658000 "sudo crictl images -o json": exit status 89 (37.438125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-658000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-658000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-658000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000: exit status 7 (28.144292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
TestStartStop/group/no-preload/serial/Pause (0.1s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-658000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-658000 --alsologtostderr -v=1: exit status 89 (40.289959ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-658000"
-- /stdout --
** stderr ** 
	I0706 11:18:09.374742    5819 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:18:09.374870    5819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:09.374873    5819 out.go:309] Setting ErrFile to fd 2...
	I0706 11:18:09.374876    5819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:09.374955    5819 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:18:09.375162    5819 out.go:303] Setting JSON to false
	I0706 11:18:09.375171    5819 mustload.go:65] Loading cluster: no-preload-658000
	I0706 11:18:09.375330    5819 config.go:182] Loaded profile config "no-preload-658000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:09.379746    5819 out.go:177] * The control plane node must be running for this command
	I0706 11:18:09.383730    5819 out.go:177]   To start a cluster, run: "minikube start -p no-preload-658000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-658000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000: exit status 7 (28.76825ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-658000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000: exit status 7 (28.478375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-658000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
TestStartStop/group/embed-certs/serial/FirstStart (9.83s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-711000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-711000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (9.76182325s)
-- stdout --
	* [embed-certs-711000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-711000 in cluster embed-certs-711000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-711000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0706 11:18:09.837768    5842 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:18:09.837881    5842 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:09.837883    5842 out.go:309] Setting ErrFile to fd 2...
	I0706 11:18:09.837885    5842 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:09.837957    5842 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:18:09.838926    5842 out.go:303] Setting JSON to false
	I0706 11:18:09.854058    5842 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2861,"bootTime":1688664628,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:18:09.854126    5842 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:18:09.859225    5842 out.go:177] * [embed-certs-711000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:18:09.866179    5842 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:18:09.866252    5842 notify.go:220] Checking for updates...
	I0706 11:18:09.870134    5842 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:18:09.874116    5842 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:18:09.877132    5842 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:18:09.880194    5842 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:18:09.883114    5842 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:18:09.886516    5842 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:09.886559    5842 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:18:09.891153    5842 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:18:09.898164    5842 start.go:297] selected driver: qemu2
	I0706 11:18:09.898172    5842 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:18:09.898180    5842 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:18:09.900135    5842 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:18:09.903114    5842 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:18:09.906236    5842 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:18:09.906257    5842 cni.go:84] Creating CNI manager for ""
	I0706 11:18:09.906264    5842 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:18:09.906272    5842 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:18:09.906278    5842 start_flags.go:319] config:
	{Name:embed-certs-711000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-711000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SS
HAgentPID:0}
	I0706 11:18:09.910475    5842 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:09.917194    5842 out.go:177] * Starting control plane node embed-certs-711000 in cluster embed-certs-711000
	I0706 11:18:09.921203    5842 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:18:09.921225    5842 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:18:09.921236    5842 cache.go:57] Caching tarball of preloaded images
	I0706 11:18:09.921300    5842 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:18:09.921304    5842 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:18:09.921363    5842 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/embed-certs-711000/config.json ...
	I0706 11:18:09.921375    5842 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/embed-certs-711000/config.json: {Name:mka900f3fb20077931e6004b9b75f53a9ca0c9f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:18:09.921578    5842 start.go:365] acquiring machines lock for embed-certs-711000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:09.921607    5842 start.go:369] acquired machines lock for "embed-certs-711000" in 23.167µs
	I0706 11:18:09.921618    5842 start.go:93] Provisioning new machine with config: &{Name:embed-certs-711000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-711000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:18:09.921644    5842 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:18:09.930150    5842 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:18:09.946293    5842 start.go:159] libmachine.API.Create for "embed-certs-711000" (driver="qemu2")
	I0706 11:18:09.946318    5842 client.go:168] LocalClient.Create starting
	I0706 11:18:09.946371    5842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:18:09.946392    5842 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:09.946400    5842 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:09.946435    5842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:18:09.946450    5842 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:09.946469    5842 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:09.946760    5842 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:18:10.059233    5842 main.go:141] libmachine: Creating SSH key...
	I0706 11:18:10.096980    5842 main.go:141] libmachine: Creating Disk image...
	I0706 11:18:10.096987    5842 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:18:10.097167    5842 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2
	I0706 11:18:10.105866    5842 main.go:141] libmachine: STDOUT: 
	I0706 11:18:10.105883    5842 main.go:141] libmachine: STDERR: 
	I0706 11:18:10.105935    5842 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2 +20000M
	I0706 11:18:10.113076    5842 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:18:10.113089    5842 main.go:141] libmachine: STDERR: 
	I0706 11:18:10.113106    5842 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2
	I0706 11:18:10.113113    5842 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:18:10.113153    5842 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:b8:46:21:fd:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2
	I0706 11:18:10.114694    5842 main.go:141] libmachine: STDOUT: 
	I0706 11:18:10.114707    5842 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:10.114726    5842 client.go:171] LocalClient.Create took 168.405542ms
	I0706 11:18:12.116923    5842 start.go:128] duration metric: createHost completed in 2.195254417s
	I0706 11:18:12.117017    5842 start.go:83] releasing machines lock for "embed-certs-711000", held for 2.195405042s
	W0706 11:18:12.117153    5842 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:12.125717    5842 out.go:177] * Deleting "embed-certs-711000" in qemu2 ...
	W0706 11:18:12.149234    5842 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:12.149262    5842 start.go:687] Will try again in 5 seconds ...
	I0706 11:18:17.151507    5842 start.go:365] acquiring machines lock for embed-certs-711000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:17.151911    5842 start.go:369] acquired machines lock for "embed-certs-711000" in 308.667µs
	I0706 11:18:17.152059    5842 start.go:93] Provisioning new machine with config: &{Name:embed-certs-711000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-711000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:18:17.152389    5842 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:18:17.161084    5842 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:18:17.208686    5842 start.go:159] libmachine.API.Create for "embed-certs-711000" (driver="qemu2")
	I0706 11:18:17.208728    5842 client.go:168] LocalClient.Create starting
	I0706 11:18:17.208867    5842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:18:17.208926    5842 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:17.208951    5842 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:17.209063    5842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:18:17.209100    5842 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:17.209115    5842 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:17.209780    5842 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:18:17.334857    5842 main.go:141] libmachine: Creating SSH key...
	I0706 11:18:17.516089    5842 main.go:141] libmachine: Creating Disk image...
	I0706 11:18:17.516100    5842 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:18:17.516261    5842 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2
	I0706 11:18:17.524940    5842 main.go:141] libmachine: STDOUT: 
	I0706 11:18:17.524963    5842 main.go:141] libmachine: STDERR: 
	I0706 11:18:17.525009    5842 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2 +20000M
	I0706 11:18:17.532211    5842 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:18:17.532222    5842 main.go:141] libmachine: STDERR: 
	I0706 11:18:17.532232    5842 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2
	I0706 11:18:17.532240    5842 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:18:17.532296    5842 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ff:66:c1:a4:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2
	I0706 11:18:17.533808    5842 main.go:141] libmachine: STDOUT: 
	I0706 11:18:17.533824    5842 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:17.533842    5842 client.go:171] LocalClient.Create took 325.1105ms
	I0706 11:18:19.535985    5842 start.go:128] duration metric: createHost completed in 2.383581s
	I0706 11:18:19.536051    5842 start.go:83] releasing machines lock for "embed-certs-711000", held for 2.384125334s
	W0706 11:18:19.536491    5842 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-711000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-711000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:19.544103    5842 out.go:177] 
	W0706 11:18:19.548206    5842 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:18:19.548282    5842 out.go:239] * 
	* 
	W0706 11:18:19.551119    5842 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:18:19.560126    5842 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-711000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000: exit status 7 (67.23075ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.83s)
TestStoppedBinaryUpgrade/Upgrade (2.32s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1723735906.exe start -p stopped-upgrade-712000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1723735906.exe start -p stopped-upgrade-712000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1723735906.exe: permission denied (1.749292ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1723735906.exe start -p stopped-upgrade-712000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1723735906.exe start -p stopped-upgrade-712000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1723735906.exe: permission denied (5.634542ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1723735906.exe start -p stopped-upgrade-712000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1723735906.exe start -p stopped-upgrade-712000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1723735906.exe: permission denied (6.500125ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.1723735906.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.32s)
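The repeated `fork/exec … permission denied` above is the error the OS returns when the file being executed lacks the execute bit, which suggests the legacy v1.6.2 binary extracted to `$TMPDIR` was written without one. A self-contained sketch of that failure mode and the usual `chmod +x` fix, using a throwaway script rather than the actual cached binary:

```shell
# Simulate the failure: mktemp creates its file with mode 0600, mirroring
# a downloaded binary whose execute bit was never set.
bin=$(mktemp)
printf '#!/bin/sh\necho started\n' > "$bin"

# Executing a non-executable file fails, as in the log above.
"$bin" 2>/dev/null && echo "ran" || echo "fork/exec: permission denied"

chmod +x "$bin"   # restore the execute bit
"$bin"            # now runs and prints "started"
rm -f "$bin"
```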

TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-712000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-712000: exit status 85 (115.41175ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-264000 sudo                                  | bridge-264000          | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-264000 sudo                                  | bridge-264000          | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-264000 sudo                                  | bridge-264000          | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-264000 sudo find                             | bridge-264000          | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-264000 sudo crio                             | bridge-264000          | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-264000                                       | bridge-264000          | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT | 06 Jul 23 11:17 PDT |
	| start   | -p kubenet-264000                                      | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | --memory=3072                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                        |         |         |                     |                     |
	|         | --network-plugin=kubenet                               |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo cat                             | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | /etc/nsswitch.conf                                     |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo cat                             | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | /etc/hosts                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo cat                             | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | /etc/resolv.conf                                       |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo crictl                          | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | pods                                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo crictl                          | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | ps --all                                               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo find                            | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                           |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo ip a s                          | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	| ssh     | -p kubenet-264000 sudo ip r s                          | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | iptables-save                                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | iptables -t nat -L -n -v                               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | systemctl status kubelet --all                         |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | systemctl cat kubelet                                  |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | journalctl -xeu kubelet --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo cat                             | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo cat                             | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | systemctl status docker --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | systemctl cat docker                                   |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo cat                             | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | /etc/docker/daemon.json                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo docker                          | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | system info                                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | systemctl status cri-docker                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | systemctl cat cri-docker                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo cat                             | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo cat                             | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo cat                             | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo cat                             | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo                                 | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo find                            | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-264000 sudo crio                            | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-264000                                      | kubenet-264000         | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT | 06 Jul 23 11:17 PDT |
	| start   | -p old-k8s-version-789000                              | old-k8s-version-789000 | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-789000        | old-k8s-version-789000 | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT | 06 Jul 23 11:17 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-789000                              | old-k8s-version-789000 | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT | 06 Jul 23 11:17 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-789000             | old-k8s-version-789000 | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT | 06 Jul 23 11:17 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-789000                              | old-k8s-version-789000 | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| ssh     | -p old-k8s-version-789000 sudo                         | old-k8s-version-789000 | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p old-k8s-version-789000                              | old-k8s-version-789000 | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p old-k8s-version-789000                              | old-k8s-version-789000 | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT | 06 Jul 23 11:17 PDT |
	| delete  | -p old-k8s-version-789000                              | old-k8s-version-789000 | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT | 06 Jul 23 11:17 PDT |
	| start   | -p no-preload-658000                                   | no-preload-658000      | jenkins | v1.30.1 | 06 Jul 23 11:17 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=qemu2                         |                        |         |         |                     |                     |
	|         |  --kubernetes-version=v1.27.3                          |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-658000             | no-preload-658000      | jenkins | v1.30.1 | 06 Jul 23 11:18 PDT | 06 Jul 23 11:18 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-658000                                   | no-preload-658000      | jenkins | v1.30.1 | 06 Jul 23 11:18 PDT | 06 Jul 23 11:18 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658000                  | no-preload-658000      | jenkins | v1.30.1 | 06 Jul 23 11:18 PDT | 06 Jul 23 11:18 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-658000                                   | no-preload-658000      | jenkins | v1.30.1 | 06 Jul 23 11:18 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=qemu2                         |                        |         |         |                     |                     |
	|         |  --kubernetes-version=v1.27.3                          |                        |         |         |                     |                     |
	| ssh     | -p no-preload-658000 sudo                              | no-preload-658000      | jenkins | v1.30.1 | 06 Jul 23 11:18 PDT |                     |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p no-preload-658000                                   | no-preload-658000      | jenkins | v1.30.1 | 06 Jul 23 11:18 PDT |                     |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-658000                                   | no-preload-658000      | jenkins | v1.30.1 | 06 Jul 23 11:18 PDT | 06 Jul 23 11:18 PDT |
	| delete  | -p no-preload-658000                                   | no-preload-658000      | jenkins | v1.30.1 | 06 Jul 23 11:18 PDT | 06 Jul 23 11:18 PDT |
	| start   | -p embed-certs-711000                                  | embed-certs-711000     | jenkins | v1.30.1 | 06 Jul 23 11:18 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=qemu2                           |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/06 11:18:09
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0706 11:18:09.837768    5842 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:18:09.837881    5842 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:09.837883    5842 out.go:309] Setting ErrFile to fd 2...
	I0706 11:18:09.837885    5842 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:09.837957    5842 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:18:09.838926    5842 out.go:303] Setting JSON to false
	I0706 11:18:09.854058    5842 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2861,"bootTime":1688664628,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:18:09.854126    5842 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:18:09.859225    5842 out.go:177] * [embed-certs-711000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:18:09.866179    5842 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:18:09.866252    5842 notify.go:220] Checking for updates...
	I0706 11:18:09.870134    5842 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:18:09.874116    5842 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:18:09.877132    5842 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:18:09.880194    5842 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:18:09.883114    5842 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:18:09.886516    5842 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:09.886559    5842 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:18:09.891153    5842 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:18:09.898164    5842 start.go:297] selected driver: qemu2
	I0706 11:18:09.898172    5842 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:18:09.898180    5842 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:18:09.900135    5842 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:18:09.903114    5842 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:18:09.906236    5842 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:18:09.906257    5842 cni.go:84] Creating CNI manager for ""
	I0706 11:18:09.906264    5842 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:18:09.906272    5842 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:18:09.906278    5842 start_flags.go:319] config:
	{Name:embed-certs-711000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-711000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:18:09.910475    5842 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:09.917194    5842 out.go:177] * Starting control plane node embed-certs-711000 in cluster embed-certs-711000
	I0706 11:18:09.921203    5842 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:18:09.921225    5842 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:18:09.921236    5842 cache.go:57] Caching tarball of preloaded images
	I0706 11:18:09.921300    5842 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:18:09.921304    5842 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:18:09.921363    5842 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/embed-certs-711000/config.json ...
	I0706 11:18:09.921375    5842 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/embed-certs-711000/config.json: {Name:mka900f3fb20077931e6004b9b75f53a9ca0c9f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:18:09.921578    5842 start.go:365] acquiring machines lock for embed-certs-711000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:09.921607    5842 start.go:369] acquired machines lock for "embed-certs-711000" in 23.167µs
	I0706 11:18:09.921618    5842 start.go:93] Provisioning new machine with config: &{Name:embed-certs-711000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-711000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:18:09.921644    5842 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:18:09.930150    5842 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:18:09.946293    5842 start.go:159] libmachine.API.Create for "embed-certs-711000" (driver="qemu2")
	I0706 11:18:09.946318    5842 client.go:168] LocalClient.Create starting
	I0706 11:18:09.946371    5842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:18:09.946392    5842 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:09.946400    5842 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:09.946435    5842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:18:09.946450    5842 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:09.946469    5842 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:09.946760    5842 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:18:10.059233    5842 main.go:141] libmachine: Creating SSH key...
	I0706 11:18:10.096980    5842 main.go:141] libmachine: Creating Disk image...
	I0706 11:18:10.096987    5842 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:18:10.097167    5842 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2
	I0706 11:18:10.105866    5842 main.go:141] libmachine: STDOUT: 
	I0706 11:18:10.105883    5842 main.go:141] libmachine: STDERR: 
	I0706 11:18:10.105935    5842 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2 +20000M
	I0706 11:18:10.113076    5842 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:18:10.113089    5842 main.go:141] libmachine: STDERR: 
	I0706 11:18:10.113106    5842 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2
	I0706 11:18:10.113113    5842 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:18:10.113153    5842 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:b8:46:21:fd:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2
	I0706 11:18:10.114694    5842 main.go:141] libmachine: STDOUT: 
	I0706 11:18:10.114707    5842 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:10.114726    5842 client.go:171] LocalClient.Create took 168.405542ms
	I0706 11:18:12.116923    5842 start.go:128] duration metric: createHost completed in 2.195254417s
	I0706 11:18:12.117017    5842 start.go:83] releasing machines lock for "embed-certs-711000", held for 2.195405042s
	W0706 11:18:12.117153    5842 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:12.125717    5842 out.go:177] * Deleting "embed-certs-711000" in qemu2 ...
	
	* 
	* Profile "stopped-upgrade-712000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-712000"

-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-492000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-492000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (9.742435375s)

-- stdout --
	* [default-k8s-diff-port-492000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-492000 in cluster default-k8s-diff-port-492000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-492000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:18:13.015946    5884 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:18:13.016049    5884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:13.016053    5884 out.go:309] Setting ErrFile to fd 2...
	I0706 11:18:13.016055    5884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:13.016127    5884 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:18:13.017192    5884 out.go:303] Setting JSON to false
	I0706 11:18:13.032530    5884 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2865,"bootTime":1688664628,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:18:13.032603    5884 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:18:13.037649    5884 out.go:177] * [default-k8s-diff-port-492000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:18:13.044700    5884 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:18:13.048634    5884 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:18:13.044775    5884 notify.go:220] Checking for updates...
	I0706 11:18:13.055648    5884 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:18:13.058649    5884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:18:13.061647    5884 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:18:13.064676    5884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:18:13.067938    5884 config.go:182] Loaded profile config "embed-certs-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:13.068004    5884 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:13.068048    5884 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:18:13.072642    5884 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:18:13.079573    5884 start.go:297] selected driver: qemu2
	I0706 11:18:13.079579    5884 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:18:13.079585    5884 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:18:13.081606    5884 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 11:18:13.084724    5884 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:18:13.087722    5884 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:18:13.087741    5884 cni.go:84] Creating CNI manager for ""
	I0706 11:18:13.087747    5884 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:18:13.087750    5884 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:18:13.087753    5884 start_flags.go:319] config:
	{Name:default-k8s-diff-port-492000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-492000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stat
icIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:18:13.091776    5884 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:13.098611    5884 out.go:177] * Starting control plane node default-k8s-diff-port-492000 in cluster default-k8s-diff-port-492000
	I0706 11:18:13.102500    5884 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:18:13.102530    5884 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:18:13.102546    5884 cache.go:57] Caching tarball of preloaded images
	I0706 11:18:13.102623    5884 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:18:13.102628    5884 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:18:13.102708    5884 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/default-k8s-diff-port-492000/config.json ...
	I0706 11:18:13.102720    5884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/default-k8s-diff-port-492000/config.json: {Name:mkedb2cbdf16e12c34a679684e65aba53549ebbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:18:13.102915    5884 start.go:365] acquiring machines lock for default-k8s-diff-port-492000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:13.102944    5884 start.go:369] acquired machines lock for "default-k8s-diff-port-492000" in 23.041µs
	I0706 11:18:13.102956    5884 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-492000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:18:13.102986    5884 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:18:13.110506    5884 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:18:13.126205    5884 start.go:159] libmachine.API.Create for "default-k8s-diff-port-492000" (driver="qemu2")
	I0706 11:18:13.126230    5884 client.go:168] LocalClient.Create starting
	I0706 11:18:13.126285    5884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:18:13.126306    5884 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:13.126318    5884 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:13.126365    5884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:18:13.126380    5884 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:13.126391    5884 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:13.126726    5884 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:18:13.234275    5884 main.go:141] libmachine: Creating SSH key...
	I0706 11:18:13.361956    5884 main.go:141] libmachine: Creating Disk image...
	I0706 11:18:13.361962    5884 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:18:13.362103    5884 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2
	I0706 11:18:13.370743    5884 main.go:141] libmachine: STDOUT: 
	I0706 11:18:13.370760    5884 main.go:141] libmachine: STDERR: 
	I0706 11:18:13.370811    5884 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2 +20000M
	I0706 11:18:13.377940    5884 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:18:13.377952    5884 main.go:141] libmachine: STDERR: 
	I0706 11:18:13.377970    5884 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2
	I0706 11:18:13.377978    5884 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:18:13.378009    5884 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:cf:3d:ff:55:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2
	I0706 11:18:13.379608    5884 main.go:141] libmachine: STDOUT: 
	I0706 11:18:13.379621    5884 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:13.379637    5884 client.go:171] LocalClient.Create took 253.401375ms
	I0706 11:18:15.381788    5884 start.go:128] duration metric: createHost completed in 2.278793167s
	I0706 11:18:15.381846    5884 start.go:83] releasing machines lock for "default-k8s-diff-port-492000", held for 2.278899084s
	W0706 11:18:15.381917    5884 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:15.392253    5884 out.go:177] * Deleting "default-k8s-diff-port-492000" in qemu2 ...
	W0706 11:18:15.412095    5884 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:15.412126    5884 start.go:687] Will try again in 5 seconds ...
	I0706 11:18:20.414414    5884 start.go:365] acquiring machines lock for default-k8s-diff-port-492000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:20.414828    5884 start.go:369] acquired machines lock for "default-k8s-diff-port-492000" in 322.375µs
	I0706 11:18:20.414932    5884 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-492000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:18:20.415225    5884 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:18:20.424696    5884 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:18:20.473012    5884 start.go:159] libmachine.API.Create for "default-k8s-diff-port-492000" (driver="qemu2")
	I0706 11:18:20.473062    5884 client.go:168] LocalClient.Create starting
	I0706 11:18:20.473173    5884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:18:20.473226    5884 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:20.473257    5884 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:20.473334    5884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:18:20.473366    5884 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:20.473381    5884 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:20.473940    5884 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:18:20.599617    5884 main.go:141] libmachine: Creating SSH key...
	I0706 11:18:20.673669    5884 main.go:141] libmachine: Creating Disk image...
	I0706 11:18:20.673675    5884 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:18:20.673825    5884 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2
	I0706 11:18:20.682543    5884 main.go:141] libmachine: STDOUT: 
	I0706 11:18:20.682556    5884 main.go:141] libmachine: STDERR: 
	I0706 11:18:20.682618    5884 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2 +20000M
	I0706 11:18:20.689796    5884 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:18:20.689807    5884 main.go:141] libmachine: STDERR: 
	I0706 11:18:20.689818    5884 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2
	I0706 11:18:20.689824    5884 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:18:20.689863    5884 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:e3:2f:6d:45:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2
	I0706 11:18:20.691384    5884 main.go:141] libmachine: STDOUT: 
	I0706 11:18:20.691397    5884 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:20.691417    5884 client.go:171] LocalClient.Create took 218.345125ms
	I0706 11:18:22.693618    5884 start.go:128] duration metric: createHost completed in 2.278316667s
	I0706 11:18:22.693682    5884 start.go:83] releasing machines lock for "default-k8s-diff-port-492000", held for 2.278838583s
	W0706 11:18:22.694236    5884 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-492000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-492000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:22.702849    5884 out.go:177] 
	W0706 11:18:22.705924    5884 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:18:22.705956    5884 out.go:239] * 
	* 
	W0706 11:18:22.708697    5884 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:18:22.717851    5884 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-492000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000: exit status 7 (68.1015ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.81s)

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-711000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-711000 create -f testdata/busybox.yaml: exit status 1 (29.943333ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-711000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000: exit status 7 (31.976542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-711000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000: exit status 7 (28.511958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-711000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-711000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-711000 describe deploy/metrics-server -n kube-system: exit status 1 (25.549083ms)

** stderr ** 
	error: context "embed-certs-711000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-711000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000: exit status 7 (28.0245ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-711000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-711000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (5.182997458s)

-- stdout --
	* [embed-certs-711000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-711000 in cluster embed-certs-711000
	* Restarting existing qemu2 VM for "embed-certs-711000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-711000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:18:20.023189    5920 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:18:20.023297    5920 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:20.023300    5920 out.go:309] Setting ErrFile to fd 2...
	I0706 11:18:20.023302    5920 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:20.023371    5920 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:18:20.024349    5920 out.go:303] Setting JSON to false
	I0706 11:18:20.039604    5920 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2872,"bootTime":1688664628,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:18:20.039696    5920 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:18:20.045006    5920 out.go:177] * [embed-certs-711000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:18:20.051932    5920 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:18:20.055930    5920 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:18:20.052001    5920 notify.go:220] Checking for updates...
	I0706 11:18:20.062915    5920 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:18:20.065968    5920 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:18:20.068844    5920 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:18:20.071935    5920 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:18:20.075239    5920 config.go:182] Loaded profile config "embed-certs-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:20.075486    5920 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:18:20.079891    5920 out.go:177] * Using the qemu2 driver based on existing profile
	I0706 11:18:20.086964    5920 start.go:297] selected driver: qemu2
	I0706 11:18:20.086971    5920 start.go:944] validating driver "qemu2" against &{Name:embed-certs-711000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-711000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:18:20.087043    5920 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:18:20.089076    5920 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:18:20.089104    5920 cni.go:84] Creating CNI manager for ""
	I0706 11:18:20.089111    5920 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:18:20.089117    5920 start_flags.go:319] config:
	{Name:embed-certs-711000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-711000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:18:20.093119    5920 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:20.099933    5920 out.go:177] * Starting control plane node embed-certs-711000 in cluster embed-certs-711000
	I0706 11:18:20.103808    5920 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:18:20.103831    5920 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:18:20.103845    5920 cache.go:57] Caching tarball of preloaded images
	I0706 11:18:20.103900    5920 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:18:20.103907    5920 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:18:20.103971    5920 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/embed-certs-711000/config.json ...
	I0706 11:18:20.104336    5920 start.go:365] acquiring machines lock for embed-certs-711000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:20.104364    5920 start.go:369] acquired machines lock for "embed-certs-711000" in 22µs
	I0706 11:18:20.104373    5920 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:18:20.104378    5920 fix.go:54] fixHost starting: 
	I0706 11:18:20.104491    5920 fix.go:102] recreateIfNeeded on embed-certs-711000: state=Stopped err=<nil>
	W0706 11:18:20.104499    5920 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:18:20.112800    5920 out.go:177] * Restarting existing qemu2 VM for "embed-certs-711000" ...
	I0706 11:18:20.116944    5920 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ff:66:c1:a4:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2
	I0706 11:18:20.118740    5920 main.go:141] libmachine: STDOUT: 
	I0706 11:18:20.118755    5920 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:20.118787    5920 fix.go:56] fixHost completed within 14.410667ms
	I0706 11:18:20.118792    5920 start.go:83] releasing machines lock for "embed-certs-711000", held for 14.423958ms
	W0706 11:18:20.118799    5920 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:18:20.118828    5920 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:20.118832    5920 start.go:687] Will try again in 5 seconds ...
	I0706 11:18:25.120934    5920 start.go:365] acquiring machines lock for embed-certs-711000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:25.121433    5920 start.go:369] acquired machines lock for "embed-certs-711000" in 368.791µs
	I0706 11:18:25.121603    5920 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:18:25.121623    5920 fix.go:54] fixHost starting: 
	I0706 11:18:25.122496    5920 fix.go:102] recreateIfNeeded on embed-certs-711000: state=Stopped err=<nil>
	W0706 11:18:25.122522    5920 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:18:25.127964    5920 out.go:177] * Restarting existing qemu2 VM for "embed-certs-711000" ...
	I0706 11:18:25.137042    5920 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ff:66:c1:a4:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/embed-certs-711000/disk.qcow2
	I0706 11:18:25.145901    5920 main.go:141] libmachine: STDOUT: 
	I0706 11:18:25.145945    5920 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:25.146056    5920 fix.go:56] fixHost completed within 24.431583ms
	I0706 11:18:25.146078    5920 start.go:83] releasing machines lock for "embed-certs-711000", held for 24.603459ms
	W0706 11:18:25.146306    5920 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-711000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-711000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:25.152190    5920 out.go:177] 
	W0706 11:18:25.155908    5920 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:18:25.155938    5920 out.go:239] * 
	* 
	W0706 11:18:25.157717    5920 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:18:25.166867    5920 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-711000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000: exit status 7 (64.738292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-492000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-492000 create -f testdata/busybox.yaml: exit status 1 (29.392625ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-492000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000: exit status 7 (27.958333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-492000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000: exit status 7 (29.016667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-492000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-492000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-492000 describe deploy/metrics-server -n kube-system: exit status 1 (25.995041ms)

** stderr ** 
	error: context "default-k8s-diff-port-492000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-492000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000: exit status 7 (28.8705ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-492000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-492000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (5.16928875s)

-- stdout --
	* [default-k8s-diff-port-492000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-492000 in cluster default-k8s-diff-port-492000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-492000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:18:23.171198    5949 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:18:23.171298    5949 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:23.171302    5949 out.go:309] Setting ErrFile to fd 2...
	I0706 11:18:23.171304    5949 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:23.171372    5949 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:18:23.172341    5949 out.go:303] Setting JSON to false
	I0706 11:18:23.187426    5949 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2875,"bootTime":1688664628,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:18:23.187477    5949 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:18:23.192453    5949 out.go:177] * [default-k8s-diff-port-492000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:18:23.200437    5949 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:18:23.200460    5949 notify.go:220] Checking for updates...
	I0706 11:18:23.204406    5949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:18:23.208354    5949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:18:23.211375    5949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:18:23.214383    5949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:18:23.217361    5949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:18:23.220653    5949 config.go:182] Loaded profile config "default-k8s-diff-port-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:23.220904    5949 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:18:23.225371    5949 out.go:177] * Using the qemu2 driver based on existing profile
	I0706 11:18:23.232360    5949 start.go:297] selected driver: qemu2
	I0706 11:18:23.232370    5949 start.go:944] validating driver "qemu2" against &{Name:default-k8s-diff-port-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-492000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:18:23.232419    5949 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:18:23.234423    5949 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 11:18:23.234452    5949 cni.go:84] Creating CNI manager for ""
	I0706 11:18:23.234459    5949 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:18:23.234466    5949 start_flags.go:319] config:
	{Name:default-k8s-diff-port-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-4920
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:18:23.238099    5949 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:23.245361    5949 out.go:177] * Starting control plane node default-k8s-diff-port-492000 in cluster default-k8s-diff-port-492000
	I0706 11:18:23.249390    5949 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:18:23.249426    5949 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:18:23.249436    5949 cache.go:57] Caching tarball of preloaded images
	I0706 11:18:23.249491    5949 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:18:23.249497    5949 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:18:23.249555    5949 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/default-k8s-diff-port-492000/config.json ...
	I0706 11:18:23.249847    5949 start.go:365] acquiring machines lock for default-k8s-diff-port-492000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:23.249874    5949 start.go:369] acquired machines lock for "default-k8s-diff-port-492000" in 20.834µs
	I0706 11:18:23.249884    5949 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:18:23.249890    5949 fix.go:54] fixHost starting: 
	I0706 11:18:23.250024    5949 fix.go:102] recreateIfNeeded on default-k8s-diff-port-492000: state=Stopped err=<nil>
	W0706 11:18:23.250031    5949 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:18:23.255382    5949 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-492000" ...
	I0706 11:18:23.263402    5949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:e3:2f:6d:45:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2
	I0706 11:18:23.265233    5949 main.go:141] libmachine: STDOUT: 
	I0706 11:18:23.265249    5949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:23.265282    5949 fix.go:56] fixHost completed within 15.393791ms
	I0706 11:18:23.265286    5949 start.go:83] releasing machines lock for "default-k8s-diff-port-492000", held for 15.407625ms
	W0706 11:18:23.265292    5949 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:18:23.265327    5949 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:23.265332    5949 start.go:687] Will try again in 5 seconds ...
	I0706 11:18:28.267450    5949 start.go:365] acquiring machines lock for default-k8s-diff-port-492000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:28.267687    5949 start.go:369] acquired machines lock for "default-k8s-diff-port-492000" in 175.333µs
	I0706 11:18:28.267774    5949 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:18:28.267788    5949 fix.go:54] fixHost starting: 
	I0706 11:18:28.268204    5949 fix.go:102] recreateIfNeeded on default-k8s-diff-port-492000: state=Stopped err=<nil>
	W0706 11:18:28.268234    5949 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:18:28.275755    5949 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-492000" ...
	I0706 11:18:28.279676    5949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:e3:2f:6d:45:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/default-k8s-diff-port-492000/disk.qcow2
	I0706 11:18:28.283799    5949 main.go:141] libmachine: STDOUT: 
	I0706 11:18:28.283827    5949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:28.283870    5949 fix.go:56] fixHost completed within 16.084625ms
	I0706 11:18:28.283879    5949 start.go:83] releasing machines lock for "default-k8s-diff-port-492000", held for 16.176834ms
	W0706 11:18:28.283987    5949 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-492000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-492000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:28.292625    5949 out.go:177] 
	W0706 11:18:28.295591    5949 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:18:28.295602    5949 out.go:239] * 
	* 
	W0706 11:18:28.296520    5949 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:18:28.306580    5949 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-492000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000: exit status 7 (33.406375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.20s)
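Every start failure in this group traces back to the same line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon on the CI host was not accepting connections when qemu's socket_vmnet_client tried to attach. A minimal sketch of a pre-flight check the harness could run (the socket path is taken from the log above; the helper name is invented for illustration):

```python
import os
import socket


def vmnet_socket_ok(path="/var/run/socket_vmnet"):
    """Return True only if `path` exists and accepts a Unix-socket
    connection -- the same condition socket_vmnet_client needs before
    `minikube start` can bring up the qemu2 network."""
    if not os.path.exists(path):
        return False
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return True
    except OSError:  # ECONNREFUSED / EACCES -> daemon down or unreadable
        return False
    finally:
        s.close()
```

A False result before the run would let the job fail fast with one clear "socket_vmnet is not running" message instead of dozens of per-test `exit status 80` failures.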

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-711000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000: exit status 7 (32.496166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-711000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-711000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-711000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.099959ms)

** stderr ** 
	error: context "embed-certs-711000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-711000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000: exit status 7 (28.597333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-711000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-711000 "sudo crictl images -o json": exit status 89 (37.493042ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-711000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-711000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-711000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000: exit status 7 (27.907375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
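The decode error above (`invalid character '*' looking for beginning of value`) is the test feeding minikube's plain-text "control plane node must be running" banner into a JSON parser; the underlying check is just a set difference over the `repoTags` reported by `crictl images -o json`. A rough sketch of that comparison (the sample JSON is invented for illustration; the field names follow the CRI output the test relies on):

```python
import json

# Invented sample of `crictl images -o json` output from a running node.
sample = """{"images": [
  {"repoTags": ["registry.k8s.io/pause:3.9"]},
  {"repoTags": ["registry.k8s.io/etcd:3.5.7-0"]}
]}"""

# Images the test expects to find in the node's image store.
want = {
    "registry.k8s.io/pause:3.9",
    "registry.k8s.io/kube-proxy:v1.27.3",
}

# Flatten every repo tag reported by crictl into one set, then diff.
got = {tag for img in json.loads(sample)["images"]
       for tag in img.get("repoTags", [])}
missing = sorted(want - got)  # corresponds to the "-want +got" list above
```

Here the node never booted, so `crictl` never ran and the whole `want` list shows up as missing in the report.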

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-711000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-711000 --alsologtostderr -v=1: exit status 89 (39.653833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-711000"

-- /stdout --
** stderr ** 
	I0706 11:18:25.427059    5971 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:18:25.427187    5971 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:25.427190    5971 out.go:309] Setting ErrFile to fd 2...
	I0706 11:18:25.427193    5971 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:25.427260    5971 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:18:25.427472    5971 out.go:303] Setting JSON to false
	I0706 11:18:25.427480    5971 mustload.go:65] Loading cluster: embed-certs-711000
	I0706 11:18:25.427645    5971 config.go:182] Loaded profile config "embed-certs-711000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:25.432049    5971 out.go:177] * The control plane node must be running for this command
	I0706 11:18:25.435360    5971 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-711000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-711000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000: exit status 7 (28.378375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-711000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000: exit status 7 (28.114541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-711000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.76s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-438000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-438000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (9.684638917s)

-- stdout --
	* [newest-cni-438000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-438000 in cluster newest-cni-438000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-438000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:18:25.882475    5994 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:18:25.882581    5994 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:25.882584    5994 out.go:309] Setting ErrFile to fd 2...
	I0706 11:18:25.882587    5994 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:25.882654    5994 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:18:25.883708    5994 out.go:303] Setting JSON to false
	I0706 11:18:25.898850    5994 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2877,"bootTime":1688664628,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:18:25.898919    5994 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:18:25.902530    5994 out.go:177] * [newest-cni-438000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:18:25.909528    5994 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:18:25.909599    5994 notify.go:220] Checking for updates...
	I0706 11:18:25.912497    5994 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:18:25.916500    5994 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:18:25.919501    5994 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:18:25.922462    5994 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:18:25.925471    5994 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:18:25.928826    5994 config.go:182] Loaded profile config "default-k8s-diff-port-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:25.928888    5994 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:25.928930    5994 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:18:25.933423    5994 out.go:177] * Using the qemu2 driver based on user configuration
	I0706 11:18:25.940461    5994 start.go:297] selected driver: qemu2
	I0706 11:18:25.940467    5994 start.go:944] validating driver "qemu2" against <nil>
	I0706 11:18:25.940473    5994 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:18:25.942357    5994 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0706 11:18:25.942374    5994 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0706 11:18:25.954445    5994 out.go:177] * Automatically selected the socket_vmnet network
	I0706 11:18:25.957610    5994 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0706 11:18:25.957632    5994 cni.go:84] Creating CNI manager for ""
	I0706 11:18:25.957647    5994 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:18:25.957651    5994 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 11:18:25.957656    5994 start_flags.go:319] config:
	{Name:newest-cni-438000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:18:25.961784    5994 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:25.968492    5994 out.go:177] * Starting control plane node newest-cni-438000 in cluster newest-cni-438000
	I0706 11:18:25.972330    5994 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:18:25.972357    5994 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:18:25.972371    5994 cache.go:57] Caching tarball of preloaded images
	I0706 11:18:25.972428    5994 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:18:25.972433    5994 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:18:25.972506    5994 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/newest-cni-438000/config.json ...
	I0706 11:18:25.972522    5994 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/newest-cni-438000/config.json: {Name:mk7f4e3d9fd1658e896d4e25afffe799c8873c56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 11:18:25.972729    5994 start.go:365] acquiring machines lock for newest-cni-438000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:25.972760    5994 start.go:369] acquired machines lock for "newest-cni-438000" in 25.583µs
	I0706 11:18:25.972773    5994 start.go:93] Provisioning new machine with config: &{Name:newest-cni-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:18:25.972804    5994 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:18:25.981346    5994 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:18:25.997198    5994 start.go:159] libmachine.API.Create for "newest-cni-438000" (driver="qemu2")
	I0706 11:18:25.997224    5994 client.go:168] LocalClient.Create starting
	I0706 11:18:25.997284    5994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:18:25.997303    5994 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:25.997313    5994 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:25.997357    5994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:18:25.997374    5994 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:25.997382    5994 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:25.998033    5994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:18:26.137097    5994 main.go:141] libmachine: Creating SSH key...
	I0706 11:18:26.210776    5994 main.go:141] libmachine: Creating Disk image...
	I0706 11:18:26.210782    5994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:18:26.210953    5994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2
	I0706 11:18:26.219429    5994 main.go:141] libmachine: STDOUT: 
	I0706 11:18:26.219443    5994 main.go:141] libmachine: STDERR: 
	I0706 11:18:26.219497    5994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2 +20000M
	I0706 11:18:26.226593    5994 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:18:26.226605    5994 main.go:141] libmachine: STDERR: 
	I0706 11:18:26.226621    5994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2
	I0706 11:18:26.226627    5994 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:18:26.226669    5994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:eb:98:39:bb:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2
	I0706 11:18:26.228186    5994 main.go:141] libmachine: STDOUT: 
	I0706 11:18:26.228199    5994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:26.228220    5994 client.go:171] LocalClient.Create took 230.9905ms
	I0706 11:18:28.230403    5994 start.go:128] duration metric: createHost completed in 2.257574917s
	I0706 11:18:28.230471    5994 start.go:83] releasing machines lock for "newest-cni-438000", held for 2.257709166s
	W0706 11:18:28.230540    5994 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:28.237705    5994 out.go:177] * Deleting "newest-cni-438000" in qemu2 ...
	W0706 11:18:28.259920    5994 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:28.259962    5994 start.go:687] Will try again in 5 seconds ...
	I0706 11:18:33.262231    5994 start.go:365] acquiring machines lock for newest-cni-438000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:33.262815    5994 start.go:369] acquired machines lock for "newest-cni-438000" in 443.583µs
	I0706 11:18:33.262952    5994 start.go:93] Provisioning new machine with config: &{Name:newest-cni-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 11:18:33.263309    5994 start.go:125] createHost starting for "" (driver="qemu2")
	I0706 11:18:33.268985    5994 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 11:18:33.316671    5994 start.go:159] libmachine.API.Create for "newest-cni-438000" (driver="qemu2")
	I0706 11:18:33.316726    5994 client.go:168] LocalClient.Create starting
	I0706 11:18:33.316906    5994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/ca.pem
	I0706 11:18:33.316986    5994 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:33.317007    5994 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:33.317096    5994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15452-1247/.minikube/certs/cert.pem
	I0706 11:18:33.317133    5994 main.go:141] libmachine: Decoding PEM data...
	I0706 11:18:33.317149    5994 main.go:141] libmachine: Parsing certificate...
	I0706 11:18:33.317753    5994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso...
	I0706 11:18:33.443414    5994 main.go:141] libmachine: Creating SSH key...
	I0706 11:18:33.485129    5994 main.go:141] libmachine: Creating Disk image...
	I0706 11:18:33.485135    5994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0706 11:18:33.485287    5994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2.raw /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2
	I0706 11:18:33.493940    5994 main.go:141] libmachine: STDOUT: 
	I0706 11:18:33.493953    5994 main.go:141] libmachine: STDERR: 
	I0706 11:18:33.494026    5994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2 +20000M
	I0706 11:18:33.501158    5994 main.go:141] libmachine: STDOUT: Image resized.
	
	I0706 11:18:33.501170    5994 main.go:141] libmachine: STDERR: 
	I0706 11:18:33.501180    5994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2
	I0706 11:18:33.501185    5994 main.go:141] libmachine: Starting QEMU VM...
	I0706 11:18:33.501230    5994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f1:f4:eb:2b:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2
	I0706 11:18:33.502778    5994 main.go:141] libmachine: STDOUT: 
	I0706 11:18:33.502791    5994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:33.502802    5994 client.go:171] LocalClient.Create took 186.068208ms
	I0706 11:18:35.504944    5994 start.go:128] duration metric: createHost completed in 2.241618208s
	I0706 11:18:35.505011    5994 start.go:83] releasing machines lock for "newest-cni-438000", held for 2.242179583s
	W0706 11:18:35.505392    5994 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-438000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-438000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:35.512018    5994 out.go:177] 
	W0706 11:18:35.516081    5994 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:18:35.516106    5994 out.go:239] * 
	* 
	W0706 11:18:35.518863    5994 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:18:35.527929    5994 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-438000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-438000 -n newest-cni-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-438000 -n newest-cni-438000: exit status 7 (68.755042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-438000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.76s)
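Every qemu2 start in this run dies the same way: `socket_vmnet_client` gets "Connection refused" on `/var/run/socket_vmnet`, so the VM is never created and every downstream check (status, images, pause) fails against a "Stopped" host. A minimal diagnostic sketch for the agent, assuming the socket path and client path reported in the log above (the restart hint is an assumption, not part of the report):

```shell
#!/bin/sh
# Diagnostic sketch for the recurring error in this run:
#   ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
# Check whether the socket_vmnet daemon is up before re-running the suite.
SOCK=/var/run/socket_vmnet

# A healthy daemon exposes a unix socket at $SOCK.
if [ -S "$SOCK" ]; then
  echo "socket exists: $SOCK"
else
  echo "socket missing: $SOCK (socket_vmnet daemon likely not running)"
fi

# A live daemon should also appear in the process list.
pgrep -f socket_vmnet >/dev/null 2>&1 \
  && echo "socket_vmnet process found" \
  || echo "no socket_vmnet process; restart it (e.g. its launchd service) and re-run"
```

If the socket is present but connections are still refused, a stale socket file left behind by a crashed daemon is a plausible cause; the `minikube delete -p <profile>` advice printed above only cleans up the profile, not the host-side daemon.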

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-492000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000: exit status 7 (30.791583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-492000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-492000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-492000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.689458ms)

** stderr ** 
	error: context "default-k8s-diff-port-492000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-492000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000: exit status 7 (28.124625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-492000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-492000 "sudo crictl images -o json": exit status 89 (40.006792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-492000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-492000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-492000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000: exit status 7 (27.905084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-492000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-492000 --alsologtostderr -v=1: exit status 89 (38.78725ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-492000"

-- /stdout --
** stderr ** 
	I0706 11:18:28.529176    6019 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:18:28.529294    6019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:28.529296    6019 out.go:309] Setting ErrFile to fd 2...
	I0706 11:18:28.529299    6019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:28.529367    6019 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:18:28.529565    6019 out.go:303] Setting JSON to false
	I0706 11:18:28.529574    6019 mustload.go:65] Loading cluster: default-k8s-diff-port-492000
	I0706 11:18:28.529736    6019 config.go:182] Loaded profile config "default-k8s-diff-port-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:28.532619    6019 out.go:177] * The control plane node must be running for this command
	I0706 11:18:28.536706    6019 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-492000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-492000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000: exit status 7 (27.938375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-492000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000: exit status 7 (28.21975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-492000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-438000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-438000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (5.177467375s)

-- stdout --
	* [newest-cni-438000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-438000 in cluster newest-cni-438000
	* Restarting existing qemu2 VM for "newest-cni-438000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-438000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0706 11:18:35.853797    6056 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:18:35.853903    6056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:35.853906    6056 out.go:309] Setting ErrFile to fd 2...
	I0706 11:18:35.853908    6056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:35.853980    6056 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:18:35.854919    6056 out.go:303] Setting JSON to false
	I0706 11:18:35.870180    6056 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2887,"bootTime":1688664628,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:18:35.870241    6056 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:18:35.874168    6056 out.go:177] * [newest-cni-438000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:18:35.881105    6056 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:18:35.881153    6056 notify.go:220] Checking for updates...
	I0706 11:18:35.885140    6056 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:18:35.888993    6056 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:18:35.892107    6056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:18:35.895123    6056 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:18:35.898110    6056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:18:35.901443    6056 config.go:182] Loaded profile config "newest-cni-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:35.901689    6056 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:18:35.906214    6056 out.go:177] * Using the qemu2 driver based on existing profile
	I0706 11:18:35.913157    6056 start.go:297] selected driver: qemu2
	I0706 11:18:35.913164    6056 start.go:944] validating driver "qemu2" against &{Name:newest-cni-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<n
il> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:18:35.913222    6056 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:18:35.915208    6056 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0706 11:18:35.915230    6056 cni.go:84] Creating CNI manager for ""
	I0706 11:18:35.915236    6056 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 11:18:35.915240    6056 start_flags.go:319] config:
	{Name:newest-cni-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-438000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeReques
ted:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:18:35.920210    6056 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 11:18:35.926063    6056 out.go:177] * Starting control plane node newest-cni-438000 in cluster newest-cni-438000
	I0706 11:18:35.930140    6056 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 11:18:35.930163    6056 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 11:18:35.930174    6056 cache.go:57] Caching tarball of preloaded images
	I0706 11:18:35.930229    6056 preload.go:174] Found /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0706 11:18:35.930237    6056 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 11:18:35.930304    6056 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/newest-cni-438000/config.json ...
	I0706 11:18:35.930661    6056 start.go:365] acquiring machines lock for newest-cni-438000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:35.930686    6056 start.go:369] acquired machines lock for "newest-cni-438000" in 18.959µs
	I0706 11:18:35.930695    6056 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:18:35.930700    6056 fix.go:54] fixHost starting: 
	I0706 11:18:35.930814    6056 fix.go:102] recreateIfNeeded on newest-cni-438000: state=Stopped err=<nil>
	W0706 11:18:35.930822    6056 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:18:35.935056    6056 out.go:177] * Restarting existing qemu2 VM for "newest-cni-438000" ...
	I0706 11:18:35.943159    6056 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f1:f4:eb:2b:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2
	I0706 11:18:35.944956    6056 main.go:141] libmachine: STDOUT: 
	I0706 11:18:35.944968    6056 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:35.944994    6056 fix.go:56] fixHost completed within 14.294416ms
	I0706 11:18:35.945002    6056 start.go:83] releasing machines lock for "newest-cni-438000", held for 14.312333ms
	W0706 11:18:35.945009    6056 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:18:35.945050    6056 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:35.945054    6056 start.go:687] Will try again in 5 seconds ...
	I0706 11:18:40.947242    6056 start.go:365] acquiring machines lock for newest-cni-438000: {Name:mk52cc0c03d09e5d0472317cc9b3d00d30aca182 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 11:18:40.947754    6056 start.go:369] acquired machines lock for "newest-cni-438000" in 399.667µs
	I0706 11:18:40.947923    6056 start.go:96] Skipping create...Using existing machine configuration
	I0706 11:18:40.947941    6056 fix.go:54] fixHost starting: 
	I0706 11:18:40.948667    6056 fix.go:102] recreateIfNeeded on newest-cni-438000: state=Stopped err=<nil>
	W0706 11:18:40.948693    6056 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 11:18:40.953136    6056 out.go:177] * Restarting existing qemu2 VM for "newest-cni-438000" ...
	I0706 11:18:40.958396    6056 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f1:f4:eb:2b:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15452-1247/.minikube/machines/newest-cni-438000/disk.qcow2
	I0706 11:18:40.967912    6056 main.go:141] libmachine: STDOUT: 
	I0706 11:18:40.967982    6056 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0706 11:18:40.968071    6056 fix.go:56] fixHost completed within 20.130542ms
	I0706 11:18:40.968093    6056 start.go:83] releasing machines lock for "newest-cni-438000", held for 20.317625ms
	W0706 11:18:40.968345    6056 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-438000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-438000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0706 11:18:40.977041    6056 out.go:177] 
	W0706 11:18:40.981239    6056 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0706 11:18:40.981269    6056 out.go:239] * 
	* 
	W0706 11:18:40.983616    6056 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 11:18:40.992055    6056 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-438000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-438000 -n newest-cni-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-438000 -n newest-cni-438000: exit status 7 (67.207583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-438000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
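
Both restart attempts above fail at the same point: the qemu2 driver launches QEMU through `socket_vmnet_client`, which cannot reach the `/var/run/socket_vmnet` unix socket ("Connection refused"), typically meaning the socket_vmnet daemon is not running on the agent. A small sketch of that connectivity check, using a hypothetical `probeSocket` helper (only the socket path is taken from the log):

```go
package main

import (
	"fmt"
	"net"
)

// probeSocket dials a unix-domain socket and returns the dial error, if any.
// Against /var/run/socket_vmnet, "connect: connection refused" corresponds to
// the driver failure captured in the log above.
func probeSocket(path string) error {
	conn, err := net.Dial("unix", path)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	fmt.Println(probeSocket("/var/run/socket_vmnet"))
}
```

A nil result would indicate the daemon is listening; the retry loop in the log (two attempts, five seconds apart) gets "Connection refused" both times, so the second `minikube start` never reaches provisioning.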

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-438000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-438000 "sudo crictl images -o json": exit status 89 (43.443208ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-438000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-438000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-438000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-438000 -n newest-cni-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-438000 -n newest-cni-438000: exit status 7 (28.42075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-438000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-438000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-438000 --alsologtostderr -v=1: exit status 89 (40.974833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-438000"

-- /stdout --
** stderr ** 
	I0706 11:18:41.175116    6070 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:18:41.175252    6070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:41.175254    6070 out.go:309] Setting ErrFile to fd 2...
	I0706 11:18:41.175257    6070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:18:41.175329    6070 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:18:41.175537    6070 out.go:303] Setting JSON to false
	I0706 11:18:41.175546    6070 mustload.go:65] Loading cluster: newest-cni-438000
	I0706 11:18:41.175737    6070 config.go:182] Loaded profile config "newest-cni-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:18:41.180395    6070 out.go:177] * The control plane node must be running for this command
	I0706 11:18:41.184509    6070 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-438000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-438000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-438000 -n newest-cni-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-438000 -n newest-cni-438000: exit status 7 (28.556083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-438000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-438000 -n newest-cni-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-438000 -n newest-cni-438000: exit status 7 (29.001333ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-438000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (134/244)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.27.3/json-events 9.02
11 TestDownloadOnly/v1.27.3/preload-exists 0
14 TestDownloadOnly/v1.27.3/kubectl 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.24
19 TestBinaryMirror 0.33
30 TestHyperKitDriverInstallOrUpdate 8.36
33 TestErrorSpam/setup 28.79
34 TestErrorSpam/start 0.37
35 TestErrorSpam/status 0.24
36 TestErrorSpam/pause 0.61
37 TestErrorSpam/unpause 0.57
38 TestErrorSpam/stop 3.22
41 TestFunctional/serial/CopySyncFile 0
42 TestFunctional/serial/StartWithProxy 84.13
43 TestFunctional/serial/AuditLog 0
44 TestFunctional/serial/SoftStart 37.27
45 TestFunctional/serial/KubeContext 0.03
46 TestFunctional/serial/KubectlGetPods 0.06
49 TestFunctional/serial/CacheCmd/cache/add_remote 5.91
50 TestFunctional/serial/CacheCmd/cache/add_local 3.17
51 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
52 TestFunctional/serial/CacheCmd/cache/list 0.03
53 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
54 TestFunctional/serial/CacheCmd/cache/cache_reload 1.27
55 TestFunctional/serial/CacheCmd/cache/delete 0.07
56 TestFunctional/serial/MinikubeKubectlCmd 0.46
57 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.54
58 TestFunctional/serial/ExtraConfig 34.6
59 TestFunctional/serial/ComponentHealth 0.04
60 TestFunctional/serial/LogsCmd 0.67
61 TestFunctional/serial/LogsFileCmd 0.6
62 TestFunctional/serial/InvalidService 4.65
64 TestFunctional/parallel/ConfigCmd 0.21
65 TestFunctional/parallel/DashboardCmd 12.45
66 TestFunctional/parallel/DryRun 0.21
67 TestFunctional/parallel/InternationalLanguage 0.11
68 TestFunctional/parallel/StatusCmd 0.26
73 TestFunctional/parallel/AddonsCmd 0.12
74 TestFunctional/parallel/PersistentVolumeClaim 25.09
76 TestFunctional/parallel/SSHCmd 0.14
77 TestFunctional/parallel/CpCmd 0.3
79 TestFunctional/parallel/FileSync 0.07
80 TestFunctional/parallel/CertSync 0.44
84 TestFunctional/parallel/NodeLabels 0.04
86 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
88 TestFunctional/parallel/License 0.3
90 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
91 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
93 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.12
94 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
95 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
96 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
97 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
98 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
100 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
101 TestFunctional/parallel/ServiceCmd/List 0.33
102 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
103 TestFunctional/parallel/ServiceCmd/HTTPS 0.12
104 TestFunctional/parallel/ServiceCmd/Format 0.12
105 TestFunctional/parallel/ServiceCmd/URL 0.11
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
107 TestFunctional/parallel/ProfileCmd/profile_list 0.15
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
109 TestFunctional/parallel/MountCmd/any-port 5.44
110 TestFunctional/parallel/MountCmd/specific-port 0.95
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.28
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.09
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.35
119 TestFunctional/parallel/ImageCommands/Setup 2.44
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.46
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.59
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.22
123 TestFunctional/parallel/DockerEnv/bash 0.6
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
125 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.6
131 TestFunctional/delete_addon-resizer_images 0.11
132 TestFunctional/delete_my-image_image 0.04
133 TestFunctional/delete_minikube_cached_images 0.04
137 TestImageBuild/serial/Setup 29.23
138 TestImageBuild/serial/NormalBuild 1.65
140 TestImageBuild/serial/BuildWithDockerIgnore 0.12
141 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
144 TestIngressAddonLegacy/StartLegacyK8sCluster 64.08
146 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 15.34
147 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.21
152 TestJSONOutput/start/Audit 0
154 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
158 TestJSONOutput/pause/Audit 0
160 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/unpause/Audit 0
166 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/stop/Command 117.22
170 TestJSONOutput/stop/Audit 0
172 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
174 TestErrorJSONOutput 0.32
179 TestMainNoArgs 0.03
180 TestMinikubeProfile 62.38
236 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
240 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
241 TestNoKubernetes/serial/ProfileList 0.14
242 TestNoKubernetes/serial/Stop 0.06
244 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
258 TestStartStop/group/old-k8s-version/serial/Stop 0.07
259 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
269 TestStartStop/group/no-preload/serial/Stop 0.06
270 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
284 TestStartStop/group/embed-certs/serial/Stop 0.06
285 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
289 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
290 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
302 TestStartStop/group/newest-cni/serial/DeployApp 0
303 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
304 TestStartStop/group/newest-cni/serial/Stop 0.06
305 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
307 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-524000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-524000: exit status 85 (92.064083ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-524000 | jenkins | v1.30.1 | 06 Jul 23 10:56 PDT |          |
	|         | -p download-only-524000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/06 10:56:11
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0706 10:56:11.799791    2467 out.go:296] Setting OutFile to fd 1 ...
	I0706 10:56:11.799923    2467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 10:56:11.799926    2467 out.go:309] Setting ErrFile to fd 2...
	I0706 10:56:11.799929    2467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 10:56:11.800028    2467 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	W0706 10:56:11.800104    2467 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15452-1247/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15452-1247/.minikube/config/config.json: no such file or directory
	I0706 10:56:11.801259    2467 out.go:303] Setting JSON to true
	I0706 10:56:11.817606    2467 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1543,"bootTime":1688664628,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 10:56:11.817682    2467 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 10:56:11.823247    2467 out.go:97] [download-only-524000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 10:56:11.824518    2467 out.go:169] MINIKUBE_LOCATION=15452
	I0706 10:56:11.823379    2467 notify.go:220] Checking for updates...
	W0706 10:56:11.823407    2467 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball: no such file or directory
	I0706 10:56:11.831288    2467 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 10:56:11.834189    2467 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 10:56:11.837241    2467 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 10:56:11.840290    2467 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	W0706 10:56:11.845173    2467 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0706 10:56:11.845374    2467 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 10:56:11.849268    2467 out.go:97] Using the qemu2 driver based on user configuration
	I0706 10:56:11.849276    2467 start.go:297] selected driver: qemu2
	I0706 10:56:11.849278    2467 start.go:944] validating driver "qemu2" against <nil>
	I0706 10:56:11.849364    2467 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 10:56:11.853151    2467 out.go:169] Automatically selected the socket_vmnet network
	I0706 10:56:11.858283    2467 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0706 10:56:11.858377    2467 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0706 10:56:11.858432    2467 cni.go:84] Creating CNI manager for ""
	I0706 10:56:11.858451    2467 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0706 10:56:11.858458    2467 start_flags.go:319] config:
	{Name:download-only-524000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-524000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 10:56:11.863867    2467 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 10:56:11.867210    2467 out.go:97] Downloading VM boot image ...
	I0706 10:56:11.867252    2467 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/iso/arm64/minikube-v1.30.1-1688144767-16765-arm64.iso
	I0706 10:56:18.692505    2467 out.go:97] Starting control plane node download-only-524000 in cluster download-only-524000
	I0706 10:56:18.692532    2467 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0706 10:56:18.744782    2467 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0706 10:56:18.744838    2467 cache.go:57] Caching tarball of preloaded images
	I0706 10:56:18.745019    2467 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0706 10:56:18.749860    2467 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0706 10:56:18.749867    2467 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0706 10:56:18.825772    2467 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0706 10:56:28.398125    2467 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0706 10:56:28.398264    2467 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0706 10:56:29.039846    2467 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0706 10:56:29.040020    2467 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/download-only-524000/config.json ...
	I0706 10:56:29.040047    2467 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/download-only-524000/config.json: {Name:mk30c2d20c5bd9770d6187b8d336f7f3af194d4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 10:56:29.040275    2467 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0706 10:56:29.040437    2467 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0706 10:56:29.380326    2467 out.go:169] 
	W0706 10:56:29.384534    2467 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/15452-1247/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1088b05b0 0x1088b05b0 0x1088b05b0 0x1088b05b0 0x1088b05b0 0x1088b05b0 0x1088b05b0] Decompressors:map[bz2:0x14000056f30 gz:0x14000056f38 tar:0x14000056ec0 tar.bz2:0x14000056ed0 tar.gz:0x14000056ee0 tar.xz:0x14000056ef0 tar.zst:0x14000056f20 tbz2:0x14000056ed0 tgz:0x14000056ee0 txz:0x14000056ef0 tzst:0x14000056f20 xz:0x14000056f60 zip:0x14000056f70 zst:0x14000056f68] Getters:map[file:0x14000500db0 http:0x14000ac6140 https:0x14000ac61e0] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0706 10:56:29.384559    2467 out_reason.go:110] 
	W0706 10:56:29.390495    2467 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 10:56:29.394478    2467 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-524000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

TestDownloadOnly/v1.27.3/json-events (9.02s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-524000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-524000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=qemu2 : (9.024686125s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (9.02s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
--- PASS: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-524000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-524000: exit status 85 (76.490792ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-524000 | jenkins | v1.30.1 | 06 Jul 23 10:56 PDT |          |
	|         | -p download-only-524000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-524000 | jenkins | v1.30.1 | 06 Jul 23 10:56 PDT |          |
	|         | -p download-only-524000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/06 10:56:29
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0706 10:56:29.582745    2483 out.go:296] Setting OutFile to fd 1 ...
	I0706 10:56:29.582863    2483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 10:56:29.582865    2483 out.go:309] Setting ErrFile to fd 2...
	I0706 10:56:29.582869    2483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 10:56:29.582942    2483 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	W0706 10:56:29.583007    2483 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15452-1247/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15452-1247/.minikube/config/config.json: no such file or directory
	I0706 10:56:29.583985    2483 out.go:303] Setting JSON to true
	I0706 10:56:29.599852    2483 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1561,"bootTime":1688664628,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 10:56:29.599926    2483 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 10:56:29.604900    2483 out.go:97] [download-only-524000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 10:56:29.608819    2483 out.go:169] MINIKUBE_LOCATION=15452
	I0706 10:56:29.605025    2483 notify.go:220] Checking for updates...
	I0706 10:56:29.614875    2483 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 10:56:29.617857    2483 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 10:56:29.620854    2483 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 10:56:29.623886    2483 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	W0706 10:56:29.629848    2483 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0706 10:56:29.630123    2483 config.go:182] Loaded profile config "download-only-524000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0706 10:56:29.630158    2483 start.go:852] api.Load failed for download-only-524000: filestore "download-only-524000": Docker machine "download-only-524000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0706 10:56:29.630202    2483 driver.go:373] Setting default libvirt URI to qemu:///system
	W0706 10:56:29.630215    2483 start.go:852] api.Load failed for download-only-524000: filestore "download-only-524000": Docker machine "download-only-524000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0706 10:56:29.633794    2483 out.go:97] Using the qemu2 driver based on existing profile
	I0706 10:56:29.633801    2483 start.go:297] selected driver: qemu2
	I0706 10:56:29.633805    2483 start.go:944] validating driver "qemu2" against &{Name:download-only-524000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-524000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 10:56:29.635899    2483 cni.go:84] Creating CNI manager for ""
	I0706 10:56:29.635912    2483 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0706 10:56:29.635917    2483 start_flags.go:319] config:
	{Name:download-only-524000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-524000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 10:56:29.639825    2483 iso.go:125] acquiring lock: {Name:mke59e310d8442c15dfd2aa2fc104ac3810ed5aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 10:56:29.642887    2483 out.go:97] Starting control plane node download-only-524000 in cluster download-only-524000
	I0706 10:56:29.642895    2483 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 10:56:29.697666    2483 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0706 10:56:29.697678    2483 cache.go:57] Caching tarball of preloaded images
	I0706 10:56:29.697842    2483 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 10:56:29.701128    2483 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0706 10:56:29.701135    2483 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0706 10:56:29.779923    2483 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4?checksum=md5:e061b1178966dc348ac19219444153f4 -> /Users/jenkins/minikube-integration/15452-1247/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-524000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-524000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)

TestBinaryMirror (0.33s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-142000 --alsologtostderr --binary-mirror http://127.0.0.1:49315 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-142000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-142000
--- PASS: TestBinaryMirror (0.33s)

TestHyperKitDriverInstallOrUpdate (8.36s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
E0706 11:11:39.435646    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/ingress-addon-legacy-946000/client.crt: no such file or directory
--- PASS: TestHyperKitDriverInstallOrUpdate (8.36s)

TestErrorSpam/setup (28.79s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-321000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-321000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 --driver=qemu2 : (28.788769292s)
--- PASS: TestErrorSpam/setup (28.79s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.61s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 pause
--- PASS: TestErrorSpam/pause (0.61s)

TestErrorSpam/unpause (0.57s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 unpause
--- PASS: TestErrorSpam/unpause (0.57s)

TestErrorSpam/stop (3.22s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 stop: (3.067297875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-321000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-321000 stop
--- PASS: TestErrorSpam/stop (3.22s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/15452-1247/.minikube/files/etc/test/nested/copy/2465/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (84.13s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-802000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-802000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m24.133209416s)
--- PASS: TestFunctional/serial/StartWithProxy (84.13s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.27s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-802000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-802000 --alsologtostderr -v=8: (37.27232575s)
functional_test.go:659: soft start took 37.272763708s for "functional-802000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.27s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-802000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-802000 cache add registry.k8s.io/pause:3.1: (2.232953458s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-802000 cache add registry.k8s.io/pause:3.3: (2.031804667s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-802000 cache add registry.k8s.io/pause:latest: (1.64744175s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.91s)

TestFunctional/serial/CacheCmd/cache/add_local (3.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-802000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1207111450/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 cache add minikube-local-cache-test:functional-802000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-arm64 -p functional-802000 cache add minikube-local-cache-test:functional-802000: (2.847148708s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 cache delete minikube-local-cache-test:functional-802000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-802000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (75.659541ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-802000 cache reload: (1.032972083s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.46s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 kubectl -- --context functional-802000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.46s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-802000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

TestFunctional/serial/ExtraConfig (34.6s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-802000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-802000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.598389542s)
functional_test.go:757: restart took 34.598509916s for "functional-802000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.60s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-802000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.67s)

TestFunctional/serial/LogsFileCmd (0.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2165899456/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.60s)

TestFunctional/serial/InvalidService (4.65s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-802000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-802000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-802000: exit status 115 (163.186084ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31463 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-802000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-802000 delete -f testdata/invalidsvc.yaml: (1.351521125s)
--- PASS: TestFunctional/serial/InvalidService (4.65s)

TestFunctional/parallel/ConfigCmd (0.21s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 config get cpus: exit status 14 (27.409625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 config get cpus: exit status 14 (29.26475ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)

TestFunctional/parallel/DashboardCmd (12.45s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-802000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-802000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3117: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.45s)

TestFunctional/parallel/DryRun (0.21s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-802000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-802000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (107.778791ms)

-- stdout --
	* [functional-802000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0706 11:01:44.673509    3104 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:01:44.673638    3104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:01:44.673641    3104 out.go:309] Setting ErrFile to fd 2...
	I0706 11:01:44.673644    3104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:01:44.673715    3104 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:01:44.674696    3104 out.go:303] Setting JSON to false
	I0706 11:01:44.690299    3104 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1876,"bootTime":1688664628,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:01:44.690359    3104 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:01:44.695717    3104 out.go:177] * [functional-802000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	I0706 11:01:44.701690    3104 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:01:44.705669    3104 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:01:44.701747    3104 notify.go:220] Checking for updates...
	I0706 11:01:44.711535    3104 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:01:44.714656    3104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:01:44.717673    3104 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:01:44.718770    3104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:01:44.721922    3104 config.go:182] Loaded profile config "functional-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:01:44.722162    3104 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:01:44.726767    3104 out.go:177] * Using the qemu2 driver based on existing profile
	I0706 11:01:44.731609    3104 start.go:297] selected driver: qemu2
	I0706 11:01:44.731615    3104 start.go:944] validating driver "qemu2" against &{Name:functional-802000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:01:44.731681    3104 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:01:44.737740    3104 out.go:177] 
	W0706 11:01:44.741691    3104 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0706 11:01:44.745590    3104 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-802000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.21s)
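The RSRC_INSUFFICIENT_REQ_MEMORY rejection above compares a request stated in MiB (binary, 1024×1024 bytes) against a minimum stated in MB (decimal, 1000×1000 bytes). A minimal Go sketch of that comparison, normalizing both sides to bytes — an illustration only, not minikube's actual validation code:

```go
package main

import "fmt"

const (
	mib int64 = 1024 * 1024 // mebibyte (binary unit, as in "250MiB")
	mb  int64 = 1000 * 1000 // megabyte (decimal unit, as in "1800MB")
)

// enoughMemory reports whether a request of reqMiB mebibytes meets a
// minimum of minMB megabytes, comparing both in bytes.
func enoughMemory(reqMiB, minMB int64) bool {
	return reqMiB*mib >= minMB*mb
}

func main() {
	fmt.Println(enoughMemory(250, 1800))  // the rejected 250MiB dry-run request → false
	fmt.Println(enoughMemory(4000, 1800)) // the profile's configured 4000 (MiB) → true
}
```

Since the dry run passes `--memory 250MB` on purpose, the non-zero exit (status 23) is the expected outcome the test asserts on.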

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-802000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-802000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.465417ms)

-- stdout --
	* [functional-802000] minikube v1.30.1 sur Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0706 11:01:44.559100    3100 out.go:296] Setting OutFile to fd 1 ...
	I0706 11:01:44.559223    3100 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:01:44.559225    3100 out.go:309] Setting ErrFile to fd 2...
	I0706 11:01:44.559227    3100 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 11:01:44.559322    3100 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
	I0706 11:01:44.560851    3100 out.go:303] Setting JSON to false
	I0706 11:01:44.577667    3100 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1876,"bootTime":1688664628,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0706 11:01:44.577743    3100 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 11:01:44.582746    3100 out.go:177] * [functional-802000] minikube v1.30.1 sur Darwin 13.4.1 (arm64)
	I0706 11:01:44.589633    3100 out.go:177]   - MINIKUBE_LOCATION=15452
	I0706 11:01:44.593706    3100 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	I0706 11:01:44.589707    3100 notify.go:220] Checking for updates...
	I0706 11:01:44.599623    3100 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0706 11:01:44.602716    3100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 11:01:44.605574    3100 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	I0706 11:01:44.608684    3100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 11:01:44.612004    3100 config.go:182] Loaded profile config "functional-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 11:01:44.612246    3100 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 11:01:44.615605    3100 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0706 11:01:44.622611    3100 start.go:297] selected driver: qemu2
	I0706 11:01:44.622619    3100 start.go:944] validating driver "qemu2" against &{Name:functional-802000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 11:01:44.622674    3100 start.go:955] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 11:01:44.629635    3100 out.go:177] 
	W0706 11:01:44.633680    3100 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0706 11:01:44.637633    3100 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)
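The `status -f` flag above takes a Go text/template. A self-contained sketch of how that exact format string renders; the `Status` struct here is an assumption standing in for minikube's real status type, whose field names the template keys merely suggest ("kublet" is literal text in the template, so the typo passes through unchanged):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Status is a hypothetical stand-in for the struct rendered by `status -f`.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// renderStatus applies the format string from the test above to a Status.
func renderStatus(st Status) string {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	var b strings.Builder
	if err := template.Must(template.New("status").Parse(format)).Execute(&b, st); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	fmt.Println(renderStatus(st))
	// host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}
```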

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.09s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [37629ed4-bff7-4f9e-9b29-ab870116f7ba] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.019819875s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-802000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-802000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-802000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-802000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [612a0b9e-5626-4b1b-8fb1-9f83bd90eada] Pending
helpers_test.go:344: "sp-pod" [612a0b9e-5626-4b1b-8fb1-9f83bd90eada] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [612a0b9e-5626-4b1b-8fb1-9f83bd90eada] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.008362541s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-802000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-802000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-802000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [776211a1-4b71-4a45-9508-fe300c3ad19a] Pending
helpers_test.go:344: "sp-pod" [776211a1-4b71-4a45-9508-fe300c3ad19a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [776211a1-4b71-4a45-9508-fe300c3ad19a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.01253725s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-802000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.09s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh -n functional-802000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 cp functional-802000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd881229299/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh -n functional-802000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.30s)

                                                
                                    
TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2465/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "sudo cat /etc/test/nested/copy/2465/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

                                                
                                    
TestFunctional/parallel/CertSync (0.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2465.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "sudo cat /etc/ssl/certs/2465.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2465.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "sudo cat /usr/share/ca-certificates/2465.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/24652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "sudo cat /etc/ssl/certs/24652.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/24652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "sudo cat /usr/share/ca-certificates/24652.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.44s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-802000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
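The NodeLabels test above passes kubectl a go-template that ranges over the node's label map and prints each key. A runnable sketch of the same template shape (the sample labels are illustrative; a real node carries many more), noting that text/template visits map keys in sorted order, which makes the output deterministic:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderLabelKeys applies the template from the test: emit each label
// key followed by a space. text/template iterates maps in sorted key
// order, so the result is stable across runs.
func renderLabelKeys(labels map[string]string) string {
	const tpl = "{{range $k, $v := .}}{{$k}} {{end}}"
	var b strings.Builder
	if err := template.Must(template.New("labels").Parse(tpl)).Execute(&b, labels); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	fmt.Println(renderLabelKeys(map[string]string{
		"kubernetes.io/arch": "arm64",
		"kubernetes.io/os":   "linux",
	}))
	// kubernetes.io/arch kubernetes.io/os
}
```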

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 ssh "sudo systemctl is-active crio": exit status 1 (68.844125ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

                                                
                                    
TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-802000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-802000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-802000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2921: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-802000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-802000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-802000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9dda3718-f839-47c0-bb11-824694d14962] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9dda3718-f839-47c0-bb11-824694d14962] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005549541s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-802000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.69.138 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-802000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-802000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-802000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-wkkbh" [b8c37d75-17c7-478d-8c03-4d6a2dbae92e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-wkkbh" [b8c37d75-17c7-478d-8c03-4d6a2dbae92e] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.01332225s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 service list -o json
functional_test.go:1493: Took "298.354292ms" to run "out/minikube-darwin-arm64 -p functional-802000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:32399
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:32399
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "120.919333ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "32.703583ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "118.081083ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "31.371041ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

TestFunctional/parallel/MountCmd/any-port (5.44s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-802000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port343938685/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1688666484482778000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port343938685/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1688666484482778000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port343938685/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1688666484482778000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port343938685/001/test-1688666484482778000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.932375ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  6 18:01 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  6 18:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  6 18:01 test-1688666484482778000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh cat /mount-9p/test-1688666484482778000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-802000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [832c1af1-3510-42dc-a4eb-8299b30dde4a] Pending
helpers_test.go:344: "busybox-mount" [832c1af1-3510-42dc-a4eb-8299b30dde4a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [832c1af1-3510-42dc-a4eb-8299b30dde4a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [832c1af1-3510-42dc-a4eb-8299b30dde4a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.0110405s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-802000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-802000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port343938685/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.44s)

TestFunctional/parallel/MountCmd/specific-port (0.95s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-802000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port561555418/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (72.965166ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-802000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port561555418/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 ssh "sudo umount -f /mount-9p": exit status 1 (68.366958ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-802000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-802000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port561555418/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.95s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.28s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.28s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-802000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-802000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-802000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-802000 image ls --format short --alsologtostderr:
I0706 11:02:00.791465    3278 out.go:296] Setting OutFile to fd 1 ...
I0706 11:02:00.791601    3278 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 11:02:00.791604    3278 out.go:309] Setting ErrFile to fd 2...
I0706 11:02:00.791606    3278 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 11:02:00.791681    3278 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
I0706 11:02:00.792049    3278 config.go:182] Loaded profile config "functional-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 11:02:00.792124    3278 config.go:182] Loaded profile config "functional-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 11:02:00.792943    3278 ssh_runner.go:195] Run: systemctl --version
I0706 11:02:00.792953    3278 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/functional-802000/id_rsa Username:docker}
I0706 11:02:00.828484    3278 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-802000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-scheduler              | v1.27.3           | bcb9e554eaab6 | 56.2MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/google-containers/addon-resizer      | functional-802000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | latest            | 2002d33a54f72 | 192MB  |
| registry.k8s.io/kube-proxy                  | v1.27.3           | fb73e92641fd5 | 66.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-apiserver              | v1.27.3           | 39dfb036b0986 | 115MB  |
| docker.io/library/nginx                     | alpine            | 66bf2c914bf4d | 41MB   |
| registry.k8s.io/kube-controller-manager     | v1.27.3           | ab3683b584ae5 | 107MB  |
| registry.k8s.io/etcd                        | 3.5.7-0           | 24bc64e911039 | 181MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/minikube-local-cache-test | functional-802000 | adb8f38ea820e | 30B    |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-802000 image ls --format table --alsologtostderr:
I0706 11:02:00.879415    3282 out.go:296] Setting OutFile to fd 1 ...
I0706 11:02:00.879535    3282 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 11:02:00.879538    3282 out.go:309] Setting ErrFile to fd 2...
I0706 11:02:00.879540    3282 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 11:02:00.879613    3282 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
I0706 11:02:00.879968    3282 config.go:182] Loaded profile config "functional-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 11:02:00.880023    3282 config.go:182] Loaded profile config "functional-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 11:02:00.880724    3282 ssh_runner.go:195] Run: systemctl --version
I0706 11:02:00.880732    3282 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/functional-802000/id_rsa Username:docker}
I0706 11:02:00.914604    3282 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-802000 image ls --format json --alsologtostderr:
[{"id":"adb8f38ea820e048e95198906a129e99f6940a738e2c039d6600c3f60f37105c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-802000"],"size":"30"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"181000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfeb
e6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"115000000"},{"id":"bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"56200000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-802000"],"size":"32900000"},{"id":"2002d33a54f72d1333751d4d1b4793a60a635eac6e94a98daf0acea501580c4f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a","r
epoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"66500000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41000000"},{"id":"ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"107000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pau
se:3.1"],"size":"525000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-802000 image ls --format json --alsologtostderr:
I0706 11:02:00.876541    3281 out.go:296] Setting OutFile to fd 1 ...
I0706 11:02:00.876667    3281 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 11:02:00.876670    3281 out.go:309] Setting ErrFile to fd 2...
I0706 11:02:00.876672    3281 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 11:02:00.876738    3281 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
I0706 11:02:00.877131    3281 config.go:182] Loaded profile config "functional-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 11:02:00.877191    3281 config.go:182] Loaded profile config "functional-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 11:02:00.878059    3281 ssh_runner.go:195] Run: systemctl --version
I0706 11:02:00.878069    3281 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/functional-802000/id_rsa Username:docker}
I0706 11:02:00.914757    3281 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E0706 11:02:00.923951    3281 logFile.go:53] failed to close the audit log: invalid argument
W0706 11:02:00.923959    3281 root.go:91] failed to log command end to audit: failed to convert logs to rows: failed to unmarshal "{\"specversion\":\"1.0\",\"id\":\"e1bd1c81-eaf1-4de8-b041-7a02323d9dc2\",\"source\":\"https://minikube.sigs.k8s.io/\",\"type\":\"io.k8s.sigs.minikube.audit\",\"datacontenttype\":\"application/json\",\"data\":{\"args\":\"-o=json --download-only -p download-only-524000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=qemu2 \",\"command\":\"start\",\"endTime\":\"\",\"id\":\"cc56f696-9f62-424a-8d7b-974309cfa899\",\"p": unexpected end of JSON input
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-802000 image ls --format yaml --alsologtostderr:
- id: 39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "115000000"
- id: ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "107000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: adb8f38ea820e048e95198906a129e99f6940a738e2c039d6600c3f60f37105c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-802000
size: "30"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 2002d33a54f72d1333751d4d1b4793a60a635eac6e94a98daf0acea501580c4f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "66500000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41000000"
- id: bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "56200000"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "181000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-802000
size: "32900000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-802000 image ls --format yaml --alsologtostderr:
I0706 11:02:00.791448    3277 out.go:296] Setting OutFile to fd 1 ...
I0706 11:02:00.791595    3277 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 11:02:00.791598    3277 out.go:309] Setting ErrFile to fd 2...
I0706 11:02:00.791600    3277 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 11:02:00.791675    3277 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
I0706 11:02:00.792106    3277 config.go:182] Loaded profile config "functional-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 11:02:00.792169    3277 config.go:182] Loaded profile config "functional-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 11:02:00.793418    3277 ssh_runner.go:195] Run: systemctl --version
I0706 11:02:00.793426    3277 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/functional-802000/id_rsa Username:docker}
I0706 11:02:00.828454    3277 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 ssh pgrep buildkitd: exit status 1 (68.255333ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image build -t localhost/my-image:functional-802000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-802000 image build -t localhost/my-image:functional-802000 testdata/build --alsologtostderr: (2.202608709s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-802000 image build -t localhost/my-image:functional-802000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 05747f5083d8
Removing intermediate container 05747f5083d8
---> ab89c0c637a4
Step 3/3 : ADD content.txt /
---> 87a70eb411aa
Successfully built 87a70eb411aa
Successfully tagged localhost/my-image:functional-802000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-802000 image build -t localhost/my-image:functional-802000 testdata/build --alsologtostderr:
I0706 11:02:01.022800    3287 out.go:296] Setting OutFile to fd 1 ...
I0706 11:02:01.022995    3287 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 11:02:01.022998    3287 out.go:309] Setting ErrFile to fd 2...
I0706 11:02:01.023001    3287 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 11:02:01.023083    3287 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15452-1247/.minikube/bin
I0706 11:02:01.023495    3287 config.go:182] Loaded profile config "functional-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 11:02:01.023914    3287 config.go:182] Loaded profile config "functional-802000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 11:02:01.024748    3287 ssh_runner.go:195] Run: systemctl --version
I0706 11:02:01.024760    3287 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15452-1247/.minikube/machines/functional-802000/id_rsa Username:docker}
I0706 11:02:01.058536    3287 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1830307439.tar
I0706 11:02:01.058591    3287 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0706 11:02:01.061777    3287 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1830307439.tar
I0706 11:02:01.063566    3287 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1830307439.tar: stat -c "%s %y" /var/lib/minikube/build/build.1830307439.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1830307439.tar': No such file or directory
I0706 11:02:01.063586    3287 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1830307439.tar --> /var/lib/minikube/build/build.1830307439.tar (3072 bytes)
I0706 11:02:01.078053    3287 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1830307439
I0706 11:02:01.081134    3287 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1830307439 -xf /var/lib/minikube/build/build.1830307439.tar
I0706 11:02:01.084382    3287 docker.go:339] Building image: /var/lib/minikube/build/build.1830307439
I0706 11:02:01.084426    3287 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-802000 /var/lib/minikube/build/build.1830307439
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0706 11:02:03.184945    3287 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-802000 /var/lib/minikube/build/build.1830307439: (2.10051425s)
I0706 11:02:03.185006    3287 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1830307439
I0706 11:02:03.188120    3287 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1830307439.tar
I0706 11:02:03.190752    3287 build_images.go:207] Built localhost/my-image:functional-802000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1830307439.tar
I0706 11:02:03.190772    3287 build_images.go:123] succeeded building to: functional-802000
I0706 11:02:03.190774    3287 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.35s)

TestFunctional/parallel/ImageCommands/Setup (2.44s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.398307375s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-802000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.44s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image load --daemon gcr.io/google-containers/addon-resizer:functional-802000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-802000 image load --daemon gcr.io/google-containers/addon-resizer:functional-802000 --alsologtostderr: (2.378534083s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image load --daemon gcr.io/google-containers/addon-resizer:functional-802000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-802000 image load --daemon gcr.io/google-containers/addon-resizer:functional-802000 --alsologtostderr: (1.514124333s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.59s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.141566875s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-802000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image load --daemon gcr.io/google-containers/addon-resizer:functional-802000 --alsologtostderr
2023/07/06 11:01:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-802000 image load --daemon gcr.io/google-containers/addon-resizer:functional-802000 --alsologtostderr: (1.959060542s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.22s)

TestFunctional/parallel/DockerEnv/bash (0.6s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-802000 docker-env) && out/minikube-darwin-arm64 status -p functional-802000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-802000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.60s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image save gcr.io/google-containers/addon-resizer:functional-802000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image rm gcr.io/google-containers/addon-resizer:functional-802000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-802000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 image save --daemon gcr.io/google-containers/addon-resizer:functional-802000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-arm64 -p functional-802000 image save --daemon gcr.io/google-containers/addon-resizer:functional-802000 --alsologtostderr: (1.514045583s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-802000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.60s)

TestFunctional/delete_addon-resizer_images (0.11s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-802000
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-802000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-802000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (29.23s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-122000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-122000 --driver=qemu2 : (29.233329292s)
--- PASS: TestImageBuild/serial/Setup (29.23s)

TestImageBuild/serial/NormalBuild (1.65s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-122000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-122000: (1.648935292s)
--- PASS: TestImageBuild/serial/NormalBuild (1.65s)

TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-122000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.1s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-122000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)

TestIngressAddonLegacy/StartLegacyK8sCluster (64.08s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-946000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-946000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m4.074889375s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (64.08s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.34s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-946000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-946000 addons enable ingress --alsologtostderr -v=5: (15.340469458s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.34s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.21s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-946000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.21s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (117.22s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-883000 --output=json --user=testUser
E0706 11:05:51.820078    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
E0706 11:05:51.828398    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
E0706 11:05:51.838866    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
E0706 11:05:51.861142    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
E0706 11:05:51.902709    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
E0706 11:05:51.982942    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
E0706 11:05:52.145117    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
E0706 11:05:52.467272    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
E0706 11:05:53.109573    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
E0706 11:05:54.391950    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
E0706 11:05:56.954281    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
E0706 11:06:02.076680    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
E0706 11:06:12.319025    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
E0706 11:06:32.801356    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-883000 --output=json --user=testUser: (1m57.220651167s)
--- PASS: TestJSONOutput/stop/Command (117.22s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-518000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-518000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.824875ms)

-- stdout --
	{"specversion":"1.0","id":"d257e2d6-759e-47e0-9075-4c9f69dcbabc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-518000] minikube v1.30.1 on Darwin 13.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c2eef8ef-ad18-4c07-89bf-f7cfc8496b15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15452"}}
	{"specversion":"1.0","id":"708b523f-ed8c-4c05-bf59-2013fc763beb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig"}}
	{"specversion":"1.0","id":"99a722be-46fd-4668-b8ea-12863d2a38e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"30ebe64d-81af-4568-99d5-de20fdf8e941","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5e0e01e9-a6e8-494a-bbdf-143663d24a2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube"}}
	{"specversion":"1.0","id":"27a87556-5eab-4f78-a2eb-47b7aa9a0425","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8c919a72-19f2-47ab-b2c5-2b60de2950ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-518000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-518000
--- PASS: TestErrorJSONOutput (0.32s)

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (62.38s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-921000 --driver=qemu2 
E0706 11:07:13.763507    2465 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15452-1247/.minikube/profiles/functional-802000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-921000 --driver=qemu2 : (29.987116125s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-922000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-922000 --driver=qemu2 : (31.599926375s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-921000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-922000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-922000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-922000
helpers_test.go:175: Cleaning up "first-921000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-921000
--- PASS: TestMinikubeProfile (62.38s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-244000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-244000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (89.957417ms)

-- stdout --
	* [NoKubernetes-244000] minikube v1.30.1 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15452
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15452-1247/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15452-1247/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-244000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-244000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.299583ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-244000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-244000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-244000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-244000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (40.345792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-244000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-789000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.07s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (29.51825ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-789000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-658000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-658000 -n no-preload-658000: exit status 7 (29.579875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-658000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-711000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-711000 -n embed-certs-711000: exit status 7 (28.500333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-711000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-492000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-492000 -n default-k8s-diff-port-492000: exit status 7 (28.583125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-492000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-438000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-438000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-438000 -n newest-cni-438000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-438000 -n newest-cni-438000: exit status 7 (29.725541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-438000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/244)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.4s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-802000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3268852121/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-802000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3268852121/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-802000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3268852121/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount1: exit status 1 (80.180833ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount2: exit status 1 (66.57475ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount2: exit status 1 (65.240916ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount2: exit status 1 (67.127375ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount2: exit status 1 (66.841792ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount2: exit status 1 (65.569708ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-802000 ssh "findmnt -T" /mount2: exit status 1 (65.77725ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-802000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3268852121/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-802000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3268852121/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-802000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3268852121/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.40s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.52s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-264000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-264000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-264000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-264000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-264000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-264000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-264000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-264000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-264000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-264000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-264000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-264000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-264000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-264000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-264000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-264000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-264000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-264000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-264000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-264000"

                                                
                                                
----------------------- debugLogs end: cilium-264000 [took: 2.28588s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-264000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-264000
--- SKIP: TestNetworkPlugins/group/cilium (2.52s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-855000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-855000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
