Test Report: QEMU_macOS 17340

49babfe4fcdff3bcc398a25366bae00d3ae6dc66:2023-10-02:31256

Failed tests (87/244)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 10.52
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.86
22 TestAddons/Setup 18.93
23 TestCertOptions 10.04
24 TestCertExpiration 198.37
25 TestDockerFlags 10
26 TestForceSystemdFlag 9.99
27 TestForceSystemdEnv 9.93
33 TestErrorSpam/setup 17.55
72 TestFunctional/parallel/ServiceCmdConnect 29.8
139 TestImageBuild/serial/BuildWithBuildArg 1.02
148 TestIngressAddonLegacy/serial/ValidateIngressAddons 50.84
183 TestMountStart/serial/StartWithMountFirst 10.09
186 TestMultiNode/serial/FreshStart2Nodes 9.72
187 TestMultiNode/serial/DeployApp2Nodes 93.18
188 TestMultiNode/serial/PingHostFrom2Pods 0.08
189 TestMultiNode/serial/AddNode 0.07
190 TestMultiNode/serial/ProfileList 0.1
191 TestMultiNode/serial/CopyFile 0.06
192 TestMultiNode/serial/StopNode 0.13
193 TestMultiNode/serial/StartAfterStop 0.1
194 TestMultiNode/serial/RestartKeepsNodes 5.4
195 TestMultiNode/serial/DeleteNode 0.09
196 TestMultiNode/serial/StopMultiNode 0.14
197 TestMultiNode/serial/RestartMultiNode 5.24
198 TestMultiNode/serial/ValidateNameConflict 19.7
202 TestPreload 9.86
204 TestScheduledStopUnix 9.85
205 TestSkaffold 11.95
208 TestRunningBinaryUpgrade 137.85
210 TestKubernetesUpgrade 15.37
223 TestStoppedBinaryUpgrade/Setup 157.65
232 TestPause/serial/Start 9.79
235 TestNoKubernetes/serial/StartWithK8s 10.44
236 TestStoppedBinaryUpgrade/Upgrade 3.24
237 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
238 TestNoKubernetes/serial/StartWithStopK8s 5.29
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.63
240 TestNoKubernetes/serial/Start 5.35
241 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.1
245 TestNoKubernetes/serial/StartNoArgs 7.56
247 TestNetworkPlugins/group/auto/Start 9.8
248 TestNetworkPlugins/group/kindnet/Start 9.92
249 TestNetworkPlugins/group/calico/Start 9.86
250 TestNetworkPlugins/group/custom-flannel/Start 9.73
251 TestNetworkPlugins/group/false/Start 9.71
252 TestNetworkPlugins/group/enable-default-cni/Start 9.7
253 TestNetworkPlugins/group/flannel/Start 9.69
254 TestNetworkPlugins/group/bridge/Start 9.88
255 TestNetworkPlugins/group/kubenet/Start 9.84
257 TestStartStop/group/old-k8s-version/serial/FirstStart 9.81
258 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
259 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
262 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
263 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
264 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
265 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
266 TestStartStop/group/old-k8s-version/serial/Pause 0.1
268 TestStartStop/group/no-preload/serial/FirstStart 9.87
269 TestStartStop/group/no-preload/serial/DeployApp 0.09
270 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
273 TestStartStop/group/no-preload/serial/SecondStart 5.25
274 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
275 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.05
276 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
277 TestStartStop/group/no-preload/serial/Pause 0.1
279 TestStartStop/group/embed-certs/serial/FirstStart 9.78
280 TestStartStop/group/embed-certs/serial/DeployApp 0.09
281 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
284 TestStartStop/group/embed-certs/serial/SecondStart 5.24
285 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
286 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
287 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
288 TestStartStop/group/embed-certs/serial/Pause 0.1
290 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.89
291 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.08
292 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
295 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 7.43
297 TestStartStop/group/newest-cni/serial/FirstStart 9.89
298 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.05
300 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.09
306 TestStartStop/group/newest-cni/serial/SecondStart 5.24
309 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (10.52s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-679000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-679000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (10.514368208s)

-- stdout --
	{"specversion":"1.0","id":"fd9a235d-bdae-4141-8a53-3d3e71fd0bfd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-679000] minikube v1.31.2 on Darwin 14.0 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a28ea770-8744-4bde-9864-53715b6da347","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17340"}}
	{"specversion":"1.0","id":"18741e92-ceeb-4a03-9d54-b3b892f2c520","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig"}}
	{"specversion":"1.0","id":"3f96698c-df6b-490e-ac4b-933d1195c37f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c2cab795-69c5-409f-afde-b923f9676f25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"26b39222-beb0-49d0-93c6-7b30706d2595","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube"}}
	{"specversion":"1.0","id":"18b998ad-d987-4015-b28b-5c883b236087","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"27b424a9-3a90-44f1-bfee-0eac0619e917","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d3f6413-fa75-4c2f-b8a5-e526034352a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c4537eec-7b93-4d8f-a917-b769fec738bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"129562bb-1e6b-482a-99d3-8fe77f30112b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-679000 in cluster download-only-679000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a4d5db8b-5b3e-460b-9df6-9351c9c1d1b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"02d96b9c-4469-4661-8486-6cc1ccd8b6bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17340-994/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103bd1880 0x103bd1880 0x103bd1880 0x103bd1880 0x103bd1880 0x103bd1880 0x103bd1880] Decompressors:map[bz2:0x14000196a20 gz:0x14000196a28 tar:0x14000196990 tar.bz2:0x140001969a0 tar.gz:0x140001969b0 tar.xz:0x140001969d0 tar.zst:0x140001969e0 tbz2:0x140001969a0 tgz:0x140001969b0 txz:0x140001969d0 tzst:0x140001969e0 xz:0x14000196a30 zip:0x14000196a40 zst:0x14000196a38] Getters:map[file:0x14000708910 http:0x140004a2640 https:0x140004a2690] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"ba00f106-4d2b-4bb3-97ff-0ce8c135cab9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1002 03:34:59.823596    1411 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:34:59.823742    1411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:34:59.823745    1411 out.go:309] Setting ErrFile to fd 2...
	I1002 03:34:59.823748    1411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:34:59.823857    1411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	W1002 03:34:59.823946    1411 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17340-994/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17340-994/.minikube/config/config.json: no such file or directory
	I1002 03:34:59.825093    1411 out.go:303] Setting JSON to true
	I1002 03:34:59.842620    1411 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":272,"bootTime":1696242627,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:34:59.842721    1411 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:34:59.851040    1411 out.go:97] [download-only-679000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:34:59.854079    1411 out.go:169] MINIKUBE_LOCATION=17340
	I1002 03:34:59.851196    1411 notify.go:220] Checking for updates...
	W1002 03:34:59.851189    1411 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 03:34:59.863076    1411 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:34:59.870983    1411 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:34:59.878871    1411 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:34:59.882005    1411 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	W1002 03:34:59.888091    1411 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 03:34:59.888306    1411 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:34:59.893023    1411 out.go:97] Using the qemu2 driver based on user configuration
	I1002 03:34:59.893029    1411 start.go:298] selected driver: qemu2
	I1002 03:34:59.893032    1411 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:34:59.893094    1411 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:34:59.896989    1411 out.go:169] Automatically selected the socket_vmnet network
	I1002 03:34:59.903037    1411 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1002 03:34:59.903118    1411 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 03:34:59.903190    1411 cni.go:84] Creating CNI manager for ""
	I1002 03:34:59.903210    1411 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 03:34:59.903215    1411 start_flags.go:321] config:
	{Name:download-only-679000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-679000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:34:59.909298    1411 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:34:59.913079    1411 out.go:97] Downloading VM boot image ...
	I1002 03:34:59.913122    1411 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso
	I1002 03:35:04.218339    1411 out.go:97] Starting control plane node download-only-679000 in cluster download-only-679000
	I1002 03:35:04.218364    1411 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 03:35:04.268459    1411 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1002 03:35:04.268479    1411 cache.go:57] Caching tarball of preloaded images
	I1002 03:35:04.268615    1411 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 03:35:04.273281    1411 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1002 03:35:04.273287    1411 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1002 03:35:04.351307    1411 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1002 03:35:09.228255    1411 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1002 03:35:09.228400    1411 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1002 03:35:09.869196    1411 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1002 03:35:09.869391    1411 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/download-only-679000/config.json ...
	I1002 03:35:09.869410    1411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/download-only-679000/config.json: {Name:mk9ffb537985013462866bd2ba05410dfae7c50b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:35:09.869638    1411 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 03:35:09.869891    1411 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I1002 03:35:10.271642    1411 out.go:169] 
	W1002 03:35:10.275607    1411 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17340-994/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103bd1880 0x103bd1880 0x103bd1880 0x103bd1880 0x103bd1880 0x103bd1880 0x103bd1880] Decompressors:map[bz2:0x14000196a20 gz:0x14000196a28 tar:0x14000196990 tar.bz2:0x140001969a0 tar.gz:0x140001969b0 tar.xz:0x140001969d0 tar.zst:0x140001969e0 tbz2:0x140001969a0 tgz:0x140001969b0 txz:0x140001969d0 tzst:0x140001969e0 xz:0x14000196a30 zip:0x14000196a40 zst:0x14000196a38] Getters:map[file:0x14000708910 http:0x140004a2640 https:0x140004a2690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1002 03:35:10.275635    1411 out_reason.go:110] 
	W1002 03:35:10.281645    1411 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:35:10.285627    1411 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-679000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (10.52s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17340-994/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
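
Both v1.16.0 failures above share one root cause: the checksum fetch for kubectl returns HTTP 404, because Kubernetes v1.16.0 (released in 2019) predates Apple Silicon and publishes no darwin/arm64 kubectl binary at dl.k8s.io. As a hypothetical illustration (not part of the test suite), the URL pattern the failing download attempts can be reconstructed from the log:

```shell
#!/bin/sh
# Illustrative sketch: rebuild the download URLs seen in the failure log.
# minikube verifies the binary against the ".sha1" sidecar file; for
# v1.16.0 neither file exists for darwin/arm64, so the checksum fetch is
# what reports "bad response code: 404".
ver="v1.16.0"; os="darwin"; arch="arm64"
url="https://dl.k8s.io/release/${ver}/bin/${os}/${arch}/kubectl"
checksum="${url}.sha1"
echo "binary:   ${url}"
echo "checksum: ${checksum}"
```

Rerunning against a Kubernetes version that does ship darwin/arm64 binaries (or on an amd64 agent) would be expected to avoid this particular 404.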

TestOffline (9.86s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-431000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-431000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.661658208s)

-- stdout --
	* [offline-docker-431000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-431000 in cluster offline-docker-431000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-431000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:49:32.376083    3134 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:49:32.376221    3134 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:49:32.376224    3134 out.go:309] Setting ErrFile to fd 2...
	I1002 03:49:32.376226    3134 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:49:32.376354    3134 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:49:32.377577    3134 out.go:303] Setting JSON to false
	I1002 03:49:32.395212    3134 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1146,"bootTime":1696242626,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:49:32.395304    3134 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:49:32.400383    3134 out.go:177] * [offline-docker-431000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:49:32.407455    3134 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:49:32.407517    3134 notify.go:220] Checking for updates...
	I1002 03:49:32.414361    3134 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:49:32.417444    3134 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:49:32.420515    3134 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:49:32.423337    3134 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:49:32.426413    3134 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:49:32.429722    3134 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:49:32.429771    3134 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:49:32.433363    3134 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:49:32.440307    3134 start.go:298] selected driver: qemu2
	I1002 03:49:32.440318    3134 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:49:32.440325    3134 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:49:32.442263    3134 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:49:32.445302    3134 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:49:32.448559    3134 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:49:32.448593    3134 cni.go:84] Creating CNI manager for ""
	I1002 03:49:32.448602    3134 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:49:32.448607    3134 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:49:32.448613    3134 start_flags.go:321] config:
	{Name:offline-docker-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-431000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:49:32.453152    3134 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:49:32.456337    3134 out.go:177] * Starting control plane node offline-docker-431000 in cluster offline-docker-431000
	I1002 03:49:32.460364    3134 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:49:32.460380    3134 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:49:32.460408    3134 cache.go:57] Caching tarball of preloaded images
	I1002 03:49:32.460462    3134 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:49:32.460467    3134 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:49:32.460529    3134 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/offline-docker-431000/config.json ...
	I1002 03:49:32.460539    3134 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/offline-docker-431000/config.json: {Name:mk76e4106b476732167f9f760d1db7870628819a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:49:32.460771    3134 start.go:365] acquiring machines lock for offline-docker-431000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:49:32.460807    3134 start.go:369] acquired machines lock for "offline-docker-431000" in 21.834µs
	I1002 03:49:32.460816    3134 start.go:93] Provisioning new machine with config: &{Name:offline-docker-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-431000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:49:32.460851    3134 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:49:32.465354    3134 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1002 03:49:32.480356    3134 start.go:159] libmachine.API.Create for "offline-docker-431000" (driver="qemu2")
	I1002 03:49:32.480382    3134 client.go:168] LocalClient.Create starting
	I1002 03:49:32.480453    3134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:49:32.480482    3134 main.go:141] libmachine: Decoding PEM data...
	I1002 03:49:32.480491    3134 main.go:141] libmachine: Parsing certificate...
	I1002 03:49:32.480529    3134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:49:32.480547    3134 main.go:141] libmachine: Decoding PEM data...
	I1002 03:49:32.480556    3134 main.go:141] libmachine: Parsing certificate...
	I1002 03:49:32.480868    3134 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:49:32.596936    3134 main.go:141] libmachine: Creating SSH key...
	I1002 03:49:32.637457    3134 main.go:141] libmachine: Creating Disk image...
	I1002 03:49:32.637468    3134 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:49:32.637657    3134 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/disk.qcow2
	I1002 03:49:32.646662    3134 main.go:141] libmachine: STDOUT: 
	I1002 03:49:32.646685    3134 main.go:141] libmachine: STDERR: 
	I1002 03:49:32.646749    3134 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/disk.qcow2 +20000M
	I1002 03:49:32.655042    3134 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:49:32.655060    3134 main.go:141] libmachine: STDERR: 
	I1002 03:49:32.655080    3134 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/disk.qcow2
	I1002 03:49:32.655089    3134 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:49:32.655127    3134 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:27:c4:9e:7b:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/disk.qcow2
	I1002 03:49:32.656813    3134 main.go:141] libmachine: STDOUT: 
	I1002 03:49:32.656827    3134 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:49:32.656849    3134 client.go:171] LocalClient.Create took 176.4605ms
	I1002 03:49:34.658193    3134 start.go:128] duration metric: createHost completed in 2.197380042s
	I1002 03:49:34.658214    3134 start.go:83] releasing machines lock for "offline-docker-431000", held for 2.197448917s
	W1002 03:49:34.658226    3134 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:49:34.666743    3134 out.go:177] * Deleting "offline-docker-431000" in qemu2 ...
	W1002 03:49:34.674161    3134 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:49:34.674171    3134 start.go:703] Will try again in 5 seconds ...
	I1002 03:49:39.676234    3134 start.go:365] acquiring machines lock for offline-docker-431000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:49:39.676692    3134 start.go:369] acquired machines lock for "offline-docker-431000" in 345.833µs
	I1002 03:49:39.676867    3134 start.go:93] Provisioning new machine with config: &{Name:offline-docker-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-431000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:49:39.677196    3134 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:49:39.682033    3134 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1002 03:49:39.731073    3134 start.go:159] libmachine.API.Create for "offline-docker-431000" (driver="qemu2")
	I1002 03:49:39.731122    3134 client.go:168] LocalClient.Create starting
	I1002 03:49:39.731229    3134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:49:39.731293    3134 main.go:141] libmachine: Decoding PEM data...
	I1002 03:49:39.731313    3134 main.go:141] libmachine: Parsing certificate...
	I1002 03:49:39.731373    3134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:49:39.731409    3134 main.go:141] libmachine: Decoding PEM data...
	I1002 03:49:39.731422    3134 main.go:141] libmachine: Parsing certificate...
	I1002 03:49:39.731956    3134 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:49:39.855803    3134 main.go:141] libmachine: Creating SSH key...
	I1002 03:49:39.952558    3134 main.go:141] libmachine: Creating Disk image...
	I1002 03:49:39.952565    3134 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:49:39.952729    3134 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/disk.qcow2
	I1002 03:49:39.961482    3134 main.go:141] libmachine: STDOUT: 
	I1002 03:49:39.961500    3134 main.go:141] libmachine: STDERR: 
	I1002 03:49:39.961565    3134 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/disk.qcow2 +20000M
	I1002 03:49:39.969067    3134 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:49:39.969082    3134 main.go:141] libmachine: STDERR: 
	I1002 03:49:39.969099    3134 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/disk.qcow2
	I1002 03:49:39.969109    3134 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:49:39.969150    3134 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:9f:62:f1:49:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/offline-docker-431000/disk.qcow2
	I1002 03:49:39.970684    3134 main.go:141] libmachine: STDOUT: 
	I1002 03:49:39.970698    3134 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:49:39.970711    3134 client.go:171] LocalClient.Create took 239.585834ms
	I1002 03:49:41.972854    3134 start.go:128] duration metric: createHost completed in 2.295678708s
	I1002 03:49:41.972925    3134 start.go:83] releasing machines lock for "offline-docker-431000", held for 2.296240792s
	W1002 03:49:41.973474    3134 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-431000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:49:41.982146    3134 out.go:177] 
	W1002 03:49:41.987171    3134 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:49:41.987196    3134 out.go:239] * 
	W1002 03:49:41.989614    3134 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:49:41.998223    3134 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-431000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:523: *** TestOffline FAILED at 2023-10-02 03:49:42.011985 -0700 PDT m=+882.419000084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-431000 -n offline-docker-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-431000 -n offline-docker-431000: exit status 7 (66.418083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-431000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-431000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-431000
--- FAIL: TestOffline (9.86s)
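Both createHost attempts in this failure end the same way: `socket_vmnet_client` cannot reach `/var/run/socket_vmnet` ("Connection refused"), i.e. the socket_vmnet daemon was not serving its unix socket on the agent. A minimal pre-flight sketch along these lines could catch that before the suite runs; the socket path is taken from the log above, and the `brew services` remediation hint is an assumption about how socket_vmnet was installed on this machine.

```shell
#!/bin/sh
# Pre-flight sketch: check that the socket_vmnet unix socket exists before
# starting qemu2-driver tests. Socket path taken from the failing log above.
check_socket() {
  # [ -S path ] tests that the path exists and is a unix socket. It does not
  # prove a live listener, but its absence explains "Connection refused" here.
  if [ -S "$1" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

SOCKET=/var/run/socket_vmnet
if [ "$(check_socket "$SOCKET")" = "missing" ]; then
  echo "socket_vmnet socket $SOCKET is missing;" \
       "restart the daemon (e.g. 'sudo brew services restart socket_vmnet')" >&2
fi
```

Running a check like this in the job's setup step would turn these 87 cascading driver failures into a single, clearly attributed environment error.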

TestAddons/Setup (18.93s)

=== RUN   TestAddons/Setup
addons_test.go:89: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-138000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:89: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-138000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 90 (18.930295958s)

-- stdout --
	* [addons-138000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-138000 in cluster addons-138000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1002 03:35:18.705010    1480 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:35:18.705170    1480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:35:18.705173    1480 out.go:309] Setting ErrFile to fd 2...
	I1002 03:35:18.705176    1480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:35:18.705295    1480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:35:18.706287    1480 out.go:303] Setting JSON to false
	I1002 03:35:18.721938    1480 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":291,"bootTime":1696242627,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:35:18.722025    1480 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:35:18.727331    1480 out.go:177] * [addons-138000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:35:18.734332    1480 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:35:18.734379    1480 notify.go:220] Checking for updates...
	I1002 03:35:18.738361    1480 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:35:18.745339    1480 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:35:18.753307    1480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:35:18.761316    1480 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:35:18.769295    1480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:35:18.773502    1480 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:35:18.777292    1480 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:35:18.784382    1480 start.go:298] selected driver: qemu2
	I1002 03:35:18.784388    1480 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:35:18.784393    1480 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:35:18.786842    1480 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:35:18.790276    1480 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:35:18.793384    1480 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:35:18.793408    1480 cni.go:84] Creating CNI manager for ""
	I1002 03:35:18.793415    1480 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:35:18.793420    1480 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:35:18.793426    1480 start_flags.go:321] config:
	{Name:addons-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-138000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID
:0 AutoPauseInterval:1m0s}
	I1002 03:35:18.798166    1480 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:35:18.806319    1480 out.go:177] * Starting control plane node addons-138000 in cluster addons-138000
	I1002 03:35:18.810284    1480 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:35:18.810299    1480 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:35:18.810312    1480 cache.go:57] Caching tarball of preloaded images
	I1002 03:35:18.810364    1480 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:35:18.810370    1480 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:35:18.810610    1480 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/addons-138000/config.json ...
	I1002 03:35:18.810622    1480 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/addons-138000/config.json: {Name:mk17134049c58cfe1cc5088ec4c8ad2efcb7b192 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:35:18.810848    1480 start.go:365] acquiring machines lock for addons-138000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:35:18.810982    1480 start.go:369] acquired machines lock for "addons-138000" in 127.083µs
	I1002 03:35:18.810993    1480 start.go:93] Provisioning new machine with config: &{Name:addons-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:addons-138000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:35:18.811024    1480 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:35:18.815338    1480 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1002 03:35:19.616924    1480 start.go:159] libmachine.API.Create for "addons-138000" (driver="qemu2")
	I1002 03:35:19.616981    1480 client.go:168] LocalClient.Create starting
	I1002 03:35:19.617616    1480 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:35:19.749096    1480 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:35:19.798501    1480 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:35:20.140573    1480 main.go:141] libmachine: Creating SSH key...
	I1002 03:35:20.192801    1480 main.go:141] libmachine: Creating Disk image...
	I1002 03:35:20.192806    1480 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:35:20.193017    1480 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/addons-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/addons-138000/disk.qcow2
	I1002 03:35:20.308622    1480 main.go:141] libmachine: STDOUT: 
	I1002 03:35:20.308684    1480 main.go:141] libmachine: STDERR: 
	I1002 03:35:20.308765    1480 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/addons-138000/disk.qcow2 +20000M
	I1002 03:35:20.343542    1480 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:35:20.343576    1480 main.go:141] libmachine: STDERR: 
	I1002 03:35:20.343601    1480 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/addons-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/addons-138000/disk.qcow2
	I1002 03:35:20.343611    1480 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:35:20.343679    1480 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/addons-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/addons-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/addons-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:8b:7a:74:5f:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/addons-138000/disk.qcow2
	I1002 03:35:20.403140    1480 main.go:141] libmachine: STDOUT: 
	I1002 03:35:20.403168    1480 main.go:141] libmachine: STDERR: 
	I1002 03:35:20.403173    1480 main.go:141] libmachine: Attempt 0
	I1002 03:35:20.403183    1480 main.go:141] libmachine: Searching for 6:8b:7a:74:5f:e in /var/db/dhcpd_leases ...
	I1002 03:35:22.404446    1480 main.go:141] libmachine: Attempt 1
	I1002 03:35:22.404538    1480 main.go:141] libmachine: Searching for 6:8b:7a:74:5f:e in /var/db/dhcpd_leases ...
	I1002 03:35:24.405787    1480 main.go:141] libmachine: Attempt 2
	I1002 03:35:24.405817    1480 main.go:141] libmachine: Searching for 6:8b:7a:74:5f:e in /var/db/dhcpd_leases ...
	I1002 03:35:26.406913    1480 main.go:141] libmachine: Attempt 3
	I1002 03:35:26.406923    1480 main.go:141] libmachine: Searching for 6:8b:7a:74:5f:e in /var/db/dhcpd_leases ...
	I1002 03:35:28.408005    1480 main.go:141] libmachine: Attempt 4
	I1002 03:35:28.408025    1480 main.go:141] libmachine: Searching for 6:8b:7a:74:5f:e in /var/db/dhcpd_leases ...
	I1002 03:35:30.409145    1480 main.go:141] libmachine: Attempt 5
	I1002 03:35:30.409171    1480 main.go:141] libmachine: Searching for 6:8b:7a:74:5f:e in /var/db/dhcpd_leases ...
	I1002 03:35:32.410312    1480 main.go:141] libmachine: Attempt 6
	I1002 03:35:32.410343    1480 main.go:141] libmachine: Searching for 6:8b:7a:74:5f:e in /var/db/dhcpd_leases ...
	I1002 03:35:32.410454    1480 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1002 03:35:32.410490    1480 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:6:8b:7a:74:5f:e ID:1,6:8b:7a:74:5f:e Lease:0x651bee73}
	I1002 03:35:32.410495    1480 main.go:141] libmachine: Found match: 6:8b:7a:74:5f:e
	I1002 03:35:32.410518    1480 main.go:141] libmachine: IP: 192.168.105.2
	I1002 03:35:32.410531    1480 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I1002 03:35:34.430905    1480 machine.go:88] provisioning docker machine ...
	I1002 03:35:34.430978    1480 buildroot.go:166] provisioning hostname "addons-138000"
	I1002 03:35:34.432396    1480 main.go:141] libmachine: Using SSH client type: native
	I1002 03:35:34.433318    1480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104478760] 0x10447aed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1002 03:35:34.433346    1480 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-138000 && echo "addons-138000" | sudo tee /etc/hostname
	I1002 03:35:34.531569    1480 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-138000
	
	I1002 03:35:34.531684    1480 main.go:141] libmachine: Using SSH client type: native
	I1002 03:35:34.532204    1480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104478760] 0x10447aed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1002 03:35:34.532220    1480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-138000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-138000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-138000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 03:35:34.609919    1480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 03:35:34.609941    1480 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17340-994/.minikube CaCertPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17340-994/.minikube}
	I1002 03:35:34.609956    1480 buildroot.go:174] setting up certificates
	I1002 03:35:34.609995    1480 provision.go:83] configureAuth start
	I1002 03:35:34.610001    1480 provision.go:138] copyHostCerts
	I1002 03:35:34.610164    1480 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17340-994/.minikube/cert.pem (1123 bytes)
	I1002 03:35:34.610505    1480 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17340-994/.minikube/key.pem (1679 bytes)
	I1002 03:35:34.610749    1480 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17340-994/.minikube/ca.pem (1082 bytes)
	I1002 03:35:34.610883    1480 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca-key.pem org=jenkins.addons-138000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-138000]
	I1002 03:35:34.871653    1480 provision.go:172] copyRemoteCerts
	I1002 03:35:34.871746    1480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 03:35:34.871761    1480 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/addons-138000/id_rsa Username:docker}
	I1002 03:35:34.907716    1480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 03:35:34.915160    1480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1002 03:35:34.921960    1480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 03:35:34.928900    1480 provision.go:86] duration metric: configureAuth took 318.891208ms
	I1002 03:35:34.928908    1480 buildroot.go:189] setting minikube options for container-runtime
	I1002 03:35:34.929008    1480 config.go:182] Loaded profile config "addons-138000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:35:34.929040    1480 main.go:141] libmachine: Using SSH client type: native
	I1002 03:35:34.929261    1480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104478760] 0x10447aed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1002 03:35:34.929266    1480 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 03:35:34.994547    1480 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1002 03:35:34.994556    1480 buildroot.go:70] root file system type: tmpfs
	I1002 03:35:34.994612    1480 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 03:35:34.994658    1480 main.go:141] libmachine: Using SSH client type: native
	I1002 03:35:34.994901    1480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104478760] 0x10447aed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1002 03:35:34.994935    1480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 03:35:35.066640    1480 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 03:35:35.066692    1480 main.go:141] libmachine: Using SSH client type: native
	I1002 03:35:35.066951    1480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104478760] 0x10447aed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1002 03:35:35.066961    1480 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 03:35:35.417109    1480 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1002 03:35:35.417126    1480 machine.go:91] provisioned docker machine in 986.160875ms
	I1002 03:35:35.417131    1480 client.go:171] LocalClient.Create took 15.799938792s
	I1002 03:35:35.417143    1480 start.go:167] duration metric: libmachine.API.Create for "addons-138000" took 15.800030375s
	I1002 03:35:35.417148    1480 start.go:300] post-start starting for "addons-138000" (driver="qemu2")
	I1002 03:35:35.417153    1480 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 03:35:35.417214    1480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 03:35:35.417227    1480 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/addons-138000/id_rsa Username:docker}
	I1002 03:35:35.452032    1480 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 03:35:35.453418    1480 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 03:35:35.453424    1480 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-994/.minikube/addons for local assets ...
	I1002 03:35:35.453486    1480 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-994/.minikube/files for local assets ...
	I1002 03:35:35.453511    1480 start.go:303] post-start completed in 36.359458ms
	I1002 03:35:35.454019    1480 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/addons-138000/config.json ...
	I1002 03:35:35.454182    1480 start.go:128] duration metric: createHost completed in 16.642935875s
	I1002 03:35:35.454218    1480 main.go:141] libmachine: Using SSH client type: native
	I1002 03:35:35.454684    1480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104478760] 0x10447aed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1002 03:35:35.454694    1480 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 03:35:35.521721    1480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696242935.737964502
	
	I1002 03:35:35.521728    1480 fix.go:206] guest clock: 1696242935.737964502
	I1002 03:35:35.521732    1480 fix.go:219] Guest: 2023-10-02 03:35:35.737964502 -0700 PDT Remote: 2023-10-02 03:35:35.454185 -0700 PDT m=+16.766145376 (delta=283.779502ms)
	I1002 03:35:35.521748    1480 fix.go:190] guest clock delta is within tolerance: 283.779502ms
	I1002 03:35:35.521751    1480 start.go:83] releasing machines lock for "addons-138000", held for 16.710546167s
	I1002 03:35:35.522074    1480 ssh_runner.go:195] Run: cat /version.json
	I1002 03:35:35.522078    1480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 03:35:35.522084    1480 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/addons-138000/id_rsa Username:docker}
	I1002 03:35:35.522109    1480 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/addons-138000/id_rsa Username:docker}
	I1002 03:35:35.645281    1480 ssh_runner.go:195] Run: systemctl --version
	I1002 03:35:35.647846    1480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 03:35:35.650077    1480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 03:35:35.650105    1480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 03:35:35.656359    1480 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 03:35:35.656373    1480 start.go:469] detecting cgroup driver to use...
	I1002 03:35:35.656512    1480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 03:35:35.662943    1480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 03:35:35.666429    1480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 03:35:35.669634    1480 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 03:35:35.669659    1480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 03:35:35.672923    1480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 03:35:35.676327    1480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 03:35:35.679598    1480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 03:35:35.682440    1480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 03:35:35.685418    1480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 03:35:35.688847    1480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 03:35:35.691928    1480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 03:35:35.694647    1480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:35:35.777740    1480 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 03:35:35.786165    1480 start.go:469] detecting cgroup driver to use...
	I1002 03:35:35.786245    1480 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 03:35:35.792666    1480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 03:35:35.797623    1480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 03:35:35.803603    1480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 03:35:35.808332    1480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 03:35:35.813009    1480 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 03:35:35.852989    1480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 03:35:35.858341    1480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 03:35:35.863725    1480 ssh_runner.go:195] Run: which cri-dockerd
	I1002 03:35:35.865014    1480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 03:35:35.868098    1480 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 03:35:35.873253    1480 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 03:35:35.951999    1480 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 03:35:36.031780    1480 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 03:35:36.031835    1480 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 03:35:36.036981    1480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:35:36.106886    1480 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 03:35:37.270527    1480 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163608583s)
	I1002 03:35:37.270595    1480 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 03:35:37.335685    1480 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 03:35:37.415183    1480 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 03:35:37.492153    1480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:35:37.571180    1480 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 03:35:37.581855    1480 out.go:177] 
	W1002 03:35:37.586897    1480 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W1002 03:35:37.586902    1480 out.go:239] * 
	* 
	W1002 03:35:37.587296    1480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:35:37.602811    1480 out.go:177] 

                                                
                                                
** /stderr **
addons_test.go:91: out/minikube-darwin-arm64 start -p addons-138000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 90
--- FAIL: TestAddons/Setup (18.93s)

TestCertOptions (10.04s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-677000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
E1002 03:53:07.356760    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-677000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.76659075s)

                                                
                                                
-- stdout --
	* [cert-options-677000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-677000 in cluster cert-options-677000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-677000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-677000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-677000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-677000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-677000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (74.985292ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-677000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-677000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-677000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-677000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-677000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (39.020625ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-677000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-677000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-677000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-10-02 03:53:14.100014 -0700 PDT m=+1094.511463667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-677000 -n cert-options-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-677000 -n cert-options-677000: exit status 7 (27.696167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-677000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-677000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-677000
--- FAIL: TestCertOptions (10.04s)

TestCertExpiration (198.37s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-301000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-301000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (11.998132417s)

                                                
                                                
-- stdout --
	* [cert-expiration-301000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-301000 in cluster cert-expiration-301000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-301000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-301000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-301000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-301000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-301000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (6.194485833s)

-- stdout --
	* [cert-expiration-301000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-301000 in cluster cert-expiration-301000
	* Restarting existing qemu2 VM for "cert-expiration-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-301000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-301000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-301000 in cluster cert-expiration-301000
	* Restarting existing qemu2 VM for "cert-expiration-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-10-02 03:56:05.100556 -0700 PDT m=+1265.515608126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-301000 -n cert-expiration-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-301000 -n cert-expiration-301000: exit status 7 (64.009917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-301000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-301000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-301000
--- FAIL: TestCertExpiration (198.37s)

TestDockerFlags (10s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-144000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-144000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.747881666s)

-- stdout --
	* [docker-flags-144000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-144000 in cluster docker-flags-144000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-144000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:52:54.209926    3570 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:52:54.210073    3570 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:52:54.210076    3570 out.go:309] Setting ErrFile to fd 2...
	I1002 03:52:54.210079    3570 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:52:54.210252    3570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:52:54.211272    3570 out.go:303] Setting JSON to false
	I1002 03:52:54.227176    3570 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1348,"bootTime":1696242626,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:52:54.227274    3570 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:52:54.231344    3570 out.go:177] * [docker-flags-144000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:52:54.237391    3570 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:52:54.237466    3570 notify.go:220] Checking for updates...
	I1002 03:52:54.241283    3570 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:52:54.244319    3570 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:52:54.247374    3570 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:52:54.250327    3570 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:52:54.253367    3570 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:52:54.256633    3570 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:52:54.256697    3570 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:52:54.256751    3570 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:52:54.261331    3570 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:52:54.268331    3570 start.go:298] selected driver: qemu2
	I1002 03:52:54.268340    3570 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:52:54.268347    3570 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:52:54.270624    3570 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:52:54.274239    3570 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:52:54.277419    3570 start_flags.go:917] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1002 03:52:54.277452    3570 cni.go:84] Creating CNI manager for ""
	I1002 03:52:54.277462    3570 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:52:54.277466    3570 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:52:54.277473    3570 start_flags.go:321] config:
	{Name:docker-flags-144000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-144000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:52:54.281909    3570 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:52:54.287295    3570 out.go:177] * Starting control plane node docker-flags-144000 in cluster docker-flags-144000
	I1002 03:52:54.291328    3570 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:52:54.291343    3570 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:52:54.291354    3570 cache.go:57] Caching tarball of preloaded images
	I1002 03:52:54.291406    3570 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:52:54.291411    3570 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:52:54.291487    3570 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/docker-flags-144000/config.json ...
	I1002 03:52:54.291499    3570 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/docker-flags-144000/config.json: {Name:mkf9c07f2d38d9deed184d33a33550c8f90f8090 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:52:54.291704    3570 start.go:365] acquiring machines lock for docker-flags-144000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:52:54.291734    3570 start.go:369] acquired machines lock for "docker-flags-144000" in 24.875µs
	I1002 03:52:54.291744    3570 start.go:93] Provisioning new machine with config: &{Name:docker-flags-144000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-144000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:52:54.291774    3570 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:52:54.300314    3570 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1002 03:52:54.317036    3570 start.go:159] libmachine.API.Create for "docker-flags-144000" (driver="qemu2")
	I1002 03:52:54.317057    3570 client.go:168] LocalClient.Create starting
	I1002 03:52:54.317123    3570 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:52:54.317148    3570 main.go:141] libmachine: Decoding PEM data...
	I1002 03:52:54.317159    3570 main.go:141] libmachine: Parsing certificate...
	I1002 03:52:54.317202    3570 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:52:54.317220    3570 main.go:141] libmachine: Decoding PEM data...
	I1002 03:52:54.317229    3570 main.go:141] libmachine: Parsing certificate...
	I1002 03:52:54.317561    3570 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:52:54.430171    3570 main.go:141] libmachine: Creating SSH key...
	I1002 03:52:54.561262    3570 main.go:141] libmachine: Creating Disk image...
	I1002 03:52:54.561269    3570 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:52:54.561455    3570 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/disk.qcow2
	I1002 03:52:54.570720    3570 main.go:141] libmachine: STDOUT: 
	I1002 03:52:54.570735    3570 main.go:141] libmachine: STDERR: 
	I1002 03:52:54.570789    3570 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/disk.qcow2 +20000M
	I1002 03:52:54.578312    3570 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:52:54.578326    3570 main.go:141] libmachine: STDERR: 
	I1002 03:52:54.578350    3570 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/disk.qcow2
	I1002 03:52:54.578356    3570 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:52:54.578389    3570 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:4e:99:f3:d7:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/disk.qcow2
	I1002 03:52:54.580026    3570 main.go:141] libmachine: STDOUT: 
	I1002 03:52:54.580041    3570 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:52:54.580070    3570 client.go:171] LocalClient.Create took 263.004917ms
	I1002 03:52:56.582248    3570 start.go:128] duration metric: createHost completed in 2.290495s
	I1002 03:52:56.582322    3570 start.go:83] releasing machines lock for "docker-flags-144000", held for 2.290623s
	W1002 03:52:56.582373    3570 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:52:56.600539    3570 out.go:177] * Deleting "docker-flags-144000" in qemu2 ...
	W1002 03:52:56.615295    3570 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:52:56.615320    3570 start.go:703] Will try again in 5 seconds ...
	I1002 03:53:01.617557    3570 start.go:365] acquiring machines lock for docker-flags-144000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:53:01.618034    3570 start.go:369] acquired machines lock for "docker-flags-144000" in 344.791µs
	I1002 03:53:01.618162    3570 start.go:93] Provisioning new machine with config: &{Name:docker-flags-144000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-144000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:53:01.618455    3570 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:53:01.627118    3570 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1002 03:53:01.675283    3570 start.go:159] libmachine.API.Create for "docker-flags-144000" (driver="qemu2")
	I1002 03:53:01.675336    3570 client.go:168] LocalClient.Create starting
	I1002 03:53:01.675439    3570 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:53:01.675487    3570 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:01.675505    3570 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:01.675570    3570 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:53:01.675597    3570 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:01.675616    3570 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:01.676112    3570 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:53:01.799577    3570 main.go:141] libmachine: Creating SSH key...
	I1002 03:53:01.872041    3570 main.go:141] libmachine: Creating Disk image...
	I1002 03:53:01.872049    3570 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:53:01.872250    3570 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/disk.qcow2
	I1002 03:53:01.881291    3570 main.go:141] libmachine: STDOUT: 
	I1002 03:53:01.881308    3570 main.go:141] libmachine: STDERR: 
	I1002 03:53:01.881379    3570 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/disk.qcow2 +20000M
	I1002 03:53:01.888908    3570 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:53:01.888922    3570 main.go:141] libmachine: STDERR: 
	I1002 03:53:01.888937    3570 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/disk.qcow2
	I1002 03:53:01.888948    3570 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:53:01.888994    3570 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:d7:59:f5:93:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/docker-flags-144000/disk.qcow2
	I1002 03:53:01.890651    3570 main.go:141] libmachine: STDOUT: 
	I1002 03:53:01.890664    3570 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:53:01.890676    3570 client.go:171] LocalClient.Create took 215.340458ms
	I1002 03:53:03.892769    3570 start.go:128] duration metric: createHost completed in 2.274320917s
	I1002 03:53:03.892811    3570 start.go:83] releasing machines lock for "docker-flags-144000", held for 2.274802583s
	W1002 03:53:03.893086    3570 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-144000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-144000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:53:03.901528    3570 out.go:177] 
	W1002 03:53:03.905695    3570 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:53:03.905710    3570 out.go:239] * 
	* 
	W1002 03:53:03.907115    3570 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:53:03.917451    3570 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-144000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-144000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-144000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (84.858875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-144000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-144000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-144000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-144000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-144000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-144000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (45.611333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-144000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-144000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-144000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-144000\"\n"
panic.go:523: *** TestDockerFlags FAILED at 2023-10-02 03:53:04.063641 -0700 PDT m=+1084.474885209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-144000 -n docker-flags-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-144000 -n docker-flags-144000: exit status 7 (28.113708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-144000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-144000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-144000
--- FAIL: TestDockerFlags (10.00s)
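Every VM-creation failure in this report bottoms out in the same error: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not running on the agent when the qemu2 driver tried to attach networking. A minimal pre-flight check is sketched below; the socket path matches the `SocketVMnetPath` in the logs above, while the Homebrew service name is an assumption about how this agent installs socket_vmnet.

```shell
# Pre-flight check for the unix socket that qemu2 networking needs.
# check_vmnet_socket returns 0 when the socket exists, 1 otherwise.
check_vmnet_socket() {
  [ -S "$1" ]
}

if check_vmnet_socket /var/run/socket_vmnet; then
  echo "socket_vmnet socket present"
else
  # Assumption: socket_vmnet was installed via Homebrew on this agent.
  echo "socket_vmnet socket missing; try: sudo brew services start socket_vmnet"
fi
```

Running a check like this before the suite would turn 80-odd individual test failures into a single, clearly attributed environment failure.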

TestForceSystemdFlag (9.99s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-340000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-340000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.784192708s)

-- stdout --
	* [force-systemd-flag-340000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-340000 in cluster force-systemd-flag-340000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-340000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:52:23.517604    3422 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:52:23.517747    3422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:52:23.517750    3422 out.go:309] Setting ErrFile to fd 2...
	I1002 03:52:23.517752    3422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:52:23.517876    3422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:52:23.518942    3422 out.go:303] Setting JSON to false
	I1002 03:52:23.534788    3422 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1317,"bootTime":1696242626,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:52:23.534875    3422 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:52:23.539621    3422 out.go:177] * [force-systemd-flag-340000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:52:23.546664    3422 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:52:23.546727    3422 notify.go:220] Checking for updates...
	I1002 03:52:23.554558    3422 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:52:23.558630    3422 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:52:23.565556    3422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:52:23.568629    3422 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:52:23.571537    3422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:52:23.574959    3422 config.go:182] Loaded profile config "NoKubernetes-264000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:52:23.575026    3422 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:52:23.575068    3422 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:52:23.579554    3422 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:52:23.586640    3422 start.go:298] selected driver: qemu2
	I1002 03:52:23.586647    3422 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:52:23.586655    3422 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:52:23.589069    3422 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:52:23.592619    3422 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:52:23.595648    3422 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 03:52:23.595665    3422 cni.go:84] Creating CNI manager for ""
	I1002 03:52:23.595674    3422 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:52:23.595678    3422 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:52:23.595685    3422 start_flags.go:321] config:
	{Name:force-systemd-flag-340000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-340000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:52:23.600171    3422 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:52:23.607630    3422 out.go:177] * Starting control plane node force-systemd-flag-340000 in cluster force-systemd-flag-340000
	I1002 03:52:23.611580    3422 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:52:23.611596    3422 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:52:23.611612    3422 cache.go:57] Caching tarball of preloaded images
	I1002 03:52:23.611676    3422 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:52:23.611682    3422 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:52:23.611751    3422 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/force-systemd-flag-340000/config.json ...
	I1002 03:52:23.611767    3422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/force-systemd-flag-340000/config.json: {Name:mk485c3d10672a6a5d01f29bf7c4f57d3878228a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:52:23.611981    3422 start.go:365] acquiring machines lock for force-systemd-flag-340000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:52:23.612015    3422 start.go:369] acquired machines lock for "force-systemd-flag-340000" in 25.667µs
	I1002 03:52:23.612026    3422 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-340000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:52:23.612059    3422 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:52:23.620600    3422 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1002 03:52:23.638567    3422 start.go:159] libmachine.API.Create for "force-systemd-flag-340000" (driver="qemu2")
	I1002 03:52:23.638591    3422 client.go:168] LocalClient.Create starting
	I1002 03:52:23.638650    3422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:52:23.638676    3422 main.go:141] libmachine: Decoding PEM data...
	I1002 03:52:23.638685    3422 main.go:141] libmachine: Parsing certificate...
	I1002 03:52:23.638751    3422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:52:23.638769    3422 main.go:141] libmachine: Decoding PEM data...
	I1002 03:52:23.638777    3422 main.go:141] libmachine: Parsing certificate...
	I1002 03:52:23.639104    3422 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:52:23.749809    3422 main.go:141] libmachine: Creating SSH key...
	I1002 03:52:23.863698    3422 main.go:141] libmachine: Creating Disk image...
	I1002 03:52:23.863707    3422 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:52:23.863897    3422 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/disk.qcow2
	I1002 03:52:23.872664    3422 main.go:141] libmachine: STDOUT: 
	I1002 03:52:23.872681    3422 main.go:141] libmachine: STDERR: 
	I1002 03:52:23.872735    3422 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/disk.qcow2 +20000M
	I1002 03:52:23.880217    3422 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:52:23.880246    3422 main.go:141] libmachine: STDERR: 
	I1002 03:52:23.880268    3422 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/disk.qcow2
	I1002 03:52:23.880273    3422 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:52:23.880314    3422 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:8e:6b:81:d9:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/disk.qcow2
	I1002 03:52:23.881920    3422 main.go:141] libmachine: STDOUT: 
	I1002 03:52:23.881934    3422 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:52:23.881953    3422 client.go:171] LocalClient.Create took 243.361709ms
	I1002 03:52:25.884145    3422 start.go:128] duration metric: createHost completed in 2.272113708s
	I1002 03:52:25.884204    3422 start.go:83] releasing machines lock for "force-systemd-flag-340000", held for 2.2722265s
	W1002 03:52:25.884302    3422 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:52:25.899367    3422 out.go:177] * Deleting "force-systemd-flag-340000" in qemu2 ...
	W1002 03:52:25.913823    3422 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:52:25.913852    3422 start.go:703] Will try again in 5 seconds ...
	I1002 03:52:30.915971    3422 start.go:365] acquiring machines lock for force-systemd-flag-340000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:52:30.916350    3422 start.go:369] acquired machines lock for "force-systemd-flag-340000" in 267.625µs
	I1002 03:52:30.916499    3422 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-340000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:52:30.916728    3422 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:52:30.926262    3422 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1002 03:52:30.973964    3422 start.go:159] libmachine.API.Create for "force-systemd-flag-340000" (driver="qemu2")
	I1002 03:52:30.974006    3422 client.go:168] LocalClient.Create starting
	I1002 03:52:30.974124    3422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:52:30.974166    3422 main.go:141] libmachine: Decoding PEM data...
	I1002 03:52:30.974184    3422 main.go:141] libmachine: Parsing certificate...
	I1002 03:52:30.974248    3422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:52:30.974275    3422 main.go:141] libmachine: Decoding PEM data...
	I1002 03:52:30.974288    3422 main.go:141] libmachine: Parsing certificate...
	I1002 03:52:30.974785    3422 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:52:31.098075    3422 main.go:141] libmachine: Creating SSH key...
	I1002 03:52:31.214620    3422 main.go:141] libmachine: Creating Disk image...
	I1002 03:52:31.214626    3422 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:52:31.214816    3422 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/disk.qcow2
	I1002 03:52:31.224168    3422 main.go:141] libmachine: STDOUT: 
	I1002 03:52:31.224187    3422 main.go:141] libmachine: STDERR: 
	I1002 03:52:31.224244    3422 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/disk.qcow2 +20000M
	I1002 03:52:31.231655    3422 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:52:31.231669    3422 main.go:141] libmachine: STDERR: 
	I1002 03:52:31.231680    3422 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/disk.qcow2
	I1002 03:52:31.231687    3422 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:52:31.231733    3422 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:19:c1:91:39:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-flag-340000/disk.qcow2
	I1002 03:52:31.233390    3422 main.go:141] libmachine: STDOUT: 
	I1002 03:52:31.233406    3422 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:52:31.233417    3422 client.go:171] LocalClient.Create took 259.409708ms
	I1002 03:52:33.235549    3422 start.go:128] duration metric: createHost completed in 2.3188365s
	I1002 03:52:33.235657    3422 start.go:83] releasing machines lock for "force-systemd-flag-340000", held for 2.319332583s
	W1002 03:52:33.236073    3422 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-340000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-340000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:52:33.247661    3422 out.go:177] 
	W1002 03:52:33.249270    3422 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:52:33.249302    3422 out.go:239] * 
	* 
	W1002 03:52:33.251992    3422 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:52:33.262643    3422 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-340000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-340000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-340000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (71.517708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-340000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-340000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-10-02 03:52:33.351734 -0700 PDT m=+1053.762355084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-340000 -n force-systemd-flag-340000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-340000 -n force-systemd-flag-340000: exit status 7 (33.620459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-340000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-340000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-340000
--- FAIL: TestForceSystemdFlag (9.99s)
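
Every provisioning failure above bottoms out in the same message: `Failed to connect to "/var/run/socket_vmnet": Connection refused`. As a hypothetical triage aid (the function name is illustrative and not part of minikube or its test suite), a small Unix-socket probe can distinguish a socket file that is missing entirely from one that exists but has no daemon accepting connections, which is the state this run is in:

```python
import socket

def probe_unix_socket(path: str) -> str:
    """Classify the state of a Unix domain socket path.

    Returns "missing" when no socket file exists at the path,
    "refused" when the file exists but nothing is accepting
    connections (the condition behind the 'Connection refused'
    errors in this log), and "ok" when a listener answers.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "ok"
    except FileNotFoundError:
        return "missing"
    except ConnectionRefusedError:
        return "refused"
    finally:
        s.close()
```

On this agent, probing `/var/run/socket_vmnet` would presumably report `"refused"`, pointing at a socket_vmnet daemon that crashed or was never started on the build machine rather than at any per-test issue.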

TestForceSystemdEnv (9.93s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-691000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-691000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.725988041s)

-- stdout --
	* [force-systemd-env-691000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-691000 in cluster force-systemd-env-691000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-691000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:52:44.278755    3524 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:52:44.278922    3524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:52:44.278925    3524 out.go:309] Setting ErrFile to fd 2...
	I1002 03:52:44.278928    3524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:52:44.279060    3524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:52:44.280076    3524 out.go:303] Setting JSON to false
	I1002 03:52:44.295769    3524 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1338,"bootTime":1696242626,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:52:44.295865    3524 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:52:44.301177    3524 out.go:177] * [force-systemd-env-691000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:52:44.308224    3524 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:52:44.313226    3524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:52:44.308294    3524 notify.go:220] Checking for updates...
	I1002 03:52:44.319182    3524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:52:44.322167    3524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:52:44.325191    3524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:52:44.328084    3524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1002 03:52:44.331521    3524 config.go:182] Loaded profile config "NoKubernetes-264000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1002 03:52:44.331589    3524 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:52:44.331631    3524 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:52:44.336170    3524 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:52:44.343205    3524 start.go:298] selected driver: qemu2
	I1002 03:52:44.343215    3524 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:52:44.343222    3524 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:52:44.345574    3524 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:52:44.349241    3524 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:52:44.350686    3524 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 03:52:44.350703    3524 cni.go:84] Creating CNI manager for ""
	I1002 03:52:44.350712    3524 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:52:44.350718    3524 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:52:44.350723    3524 start_flags.go:321] config:
	{Name:force-systemd-env-691000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-691000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:52:44.355151    3524 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:52:44.362215    3524 out.go:177] * Starting control plane node force-systemd-env-691000 in cluster force-systemd-env-691000
	I1002 03:52:44.366185    3524 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:52:44.366199    3524 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:52:44.366207    3524 cache.go:57] Caching tarball of preloaded images
	I1002 03:52:44.366259    3524 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:52:44.366265    3524 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:52:44.366323    3524 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/force-systemd-env-691000/config.json ...
	I1002 03:52:44.366335    3524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/force-systemd-env-691000/config.json: {Name:mke2dab5434720e70817829f68738931597c183c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:52:44.366550    3524 start.go:365] acquiring machines lock for force-systemd-env-691000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:52:44.366585    3524 start.go:369] acquired machines lock for "force-systemd-env-691000" in 25.583µs
	I1002 03:52:44.366596    3524 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-691000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-691000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:52:44.366626    3524 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:52:44.375098    3524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1002 03:52:44.392299    3524 start.go:159] libmachine.API.Create for "force-systemd-env-691000" (driver="qemu2")
	I1002 03:52:44.392333    3524 client.go:168] LocalClient.Create starting
	I1002 03:52:44.392386    3524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:52:44.392416    3524 main.go:141] libmachine: Decoding PEM data...
	I1002 03:52:44.392429    3524 main.go:141] libmachine: Parsing certificate...
	I1002 03:52:44.392465    3524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:52:44.392486    3524 main.go:141] libmachine: Decoding PEM data...
	I1002 03:52:44.392494    3524 main.go:141] libmachine: Parsing certificate...
	I1002 03:52:44.392854    3524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:52:44.505576    3524 main.go:141] libmachine: Creating SSH key...
	I1002 03:52:44.609260    3524 main.go:141] libmachine: Creating Disk image...
	I1002 03:52:44.609272    3524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:52:44.609442    3524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/disk.qcow2
	I1002 03:52:44.618496    3524 main.go:141] libmachine: STDOUT: 
	I1002 03:52:44.618514    3524 main.go:141] libmachine: STDERR: 
	I1002 03:52:44.618584    3524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/disk.qcow2 +20000M
	I1002 03:52:44.626138    3524 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:52:44.626152    3524 main.go:141] libmachine: STDERR: 
	I1002 03:52:44.626174    3524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/disk.qcow2
	I1002 03:52:44.626179    3524 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:52:44.626216    3524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:e9:49:7e:83:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/disk.qcow2
	I1002 03:52:44.627836    3524 main.go:141] libmachine: STDOUT: 
	I1002 03:52:44.627849    3524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:52:44.627864    3524 client.go:171] LocalClient.Create took 235.532042ms
	I1002 03:52:46.630003    3524 start.go:128] duration metric: createHost completed in 2.263402708s
	I1002 03:52:46.630071    3524 start.go:83] releasing machines lock for "force-systemd-env-691000", held for 2.263523333s
	W1002 03:52:46.630121    3524 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:52:46.652573    3524 out.go:177] * Deleting "force-systemd-env-691000" in qemu2 ...
	W1002 03:52:46.679713    3524 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:52:46.679743    3524 start.go:703] Will try again in 5 seconds ...
	I1002 03:52:51.681945    3524 start.go:365] acquiring machines lock for force-systemd-env-691000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:52:51.682392    3524 start.go:369] acquired machines lock for "force-systemd-env-691000" in 329.792µs
	I1002 03:52:51.682566    3524 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-691000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-691000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:52:51.682824    3524 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:52:51.687837    3524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1002 03:52:51.735945    3524 start.go:159] libmachine.API.Create for "force-systemd-env-691000" (driver="qemu2")
	I1002 03:52:51.735989    3524 client.go:168] LocalClient.Create starting
	I1002 03:52:51.736100    3524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:52:51.736148    3524 main.go:141] libmachine: Decoding PEM data...
	I1002 03:52:51.736171    3524 main.go:141] libmachine: Parsing certificate...
	I1002 03:52:51.736233    3524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:52:51.736267    3524 main.go:141] libmachine: Decoding PEM data...
	I1002 03:52:51.736300    3524 main.go:141] libmachine: Parsing certificate...
	I1002 03:52:51.736786    3524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:52:51.858671    3524 main.go:141] libmachine: Creating SSH key...
	I1002 03:52:51.915103    3524 main.go:141] libmachine: Creating Disk image...
	I1002 03:52:51.915109    3524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:52:51.915291    3524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/disk.qcow2
	I1002 03:52:51.924260    3524 main.go:141] libmachine: STDOUT: 
	I1002 03:52:51.924275    3524 main.go:141] libmachine: STDERR: 
	I1002 03:52:51.924335    3524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/disk.qcow2 +20000M
	I1002 03:52:51.931908    3524 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:52:51.931920    3524 main.go:141] libmachine: STDERR: 
	I1002 03:52:51.931936    3524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/disk.qcow2
	I1002 03:52:51.931942    3524 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:52:51.931981    3524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:a8:27:db:62:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/force-systemd-env-691000/disk.qcow2
	I1002 03:52:51.933625    3524 main.go:141] libmachine: STDOUT: 
	I1002 03:52:51.933637    3524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:52:51.933648    3524 client.go:171] LocalClient.Create took 197.656459ms
	I1002 03:52:53.935800    3524 start.go:128] duration metric: createHost completed in 2.252989666s
	I1002 03:52:53.935871    3524 start.go:83] releasing machines lock for "force-systemd-env-691000", held for 2.253496292s
	W1002 03:52:53.936355    3524 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-691000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-691000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:52:53.945909    3524 out.go:177] 
	W1002 03:52:53.950991    3524 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:52:53.951035    3524 out.go:239] * 
	* 
	W1002 03:52:53.953564    3524 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:52:53.966951    3524 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-691000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-691000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-691000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (74.198417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-691000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-691000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-10-02 03:52:54.057406 -0700 PDT m=+1074.468449959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-691000 -n force-systemd-env-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-691000 -n force-systemd-env-691000: exit status 7 (33.158375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-691000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-691000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-691000
--- FAIL: TestForceSystemdEnv (9.93s)
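
The trace above shows minikube's create-then-retry shape: a failed createHost, `* Deleting "force-systemd-env-691000" in qemu2 ...`, `Will try again in 5 seconds ...`, a second failed attempt, then exit with GUEST_PROVISION. A minimal sketch of that bounded-retry pattern (the function, its names, and the fixed attempt count of 2 are illustrative, not minikube's actual implementation):

```python
import time

def start_with_retry(create, cleanup, attempts: int = 2, delay: float = 5.0):
    """Call create() up to `attempts` times, running cleanup() and
    pausing `delay` seconds between failures; re-raise the last
    error once every attempt has failed."""
    last_err = None
    for i in range(attempts):
        try:
            return create()
        except OSError as err:
            last_err = err
            cleanup()                # "* Deleting ... in qemu2 ..."
            if i + 1 < attempts:
                time.sleep(delay)    # "Will try again in 5 seconds ..."
    raise last_err
```

Since the failure here is a host-level daemon being down, the retry is futile by design: both attempts hit the same refused socket, which is why each of these tests fails in roughly the same ten seconds.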

TestErrorSpam/setup (17.55s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-755000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-755000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 --driver=qemu2 : exit status 90 (17.548287459s)

-- stdout --
	* [nospam-755000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node nospam-755000 in cluster nospam-755000
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-755000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 --driver=qemu2 " failed: exit status 90
error_spam_test.go:96: unexpected stderr: "X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Job failed. See \"journalctl -xe\" for details."
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-755000] minikube v1.31.2 on Darwin 14.0 (arm64)
- MINIKUBE_LOCATION=17340
- KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting control plane node nospam-755000 in cluster nospam-755000
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
error_spam_test.go:111: minikube stderr:
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:

stderr:
Job failed. See "journalctl -xe" for details.

* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (17.55s)

TestFunctional/parallel/ServiceCmdConnect (29.8s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-680000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-680000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-7gk66" [14b33be8-83df-463d-bed0-39fd241cb0d7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-7gk66" [14b33be8-83df-463d-bed0-39fd241cb0d7] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.010461292s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:31424
functional_test.go:1660: error fetching http://192.168.105.4:31424: Get "http://192.168.105.4:31424": dial tcp 192.168.105.4:31424: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31424: Get "http://192.168.105.4:31424": dial tcp 192.168.105.4:31424: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31424: Get "http://192.168.105.4:31424": dial tcp 192.168.105.4:31424: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31424: Get "http://192.168.105.4:31424": dial tcp 192.168.105.4:31424: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31424: Get "http://192.168.105.4:31424": dial tcp 192.168.105.4:31424: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31424: Get "http://192.168.105.4:31424": dial tcp 192.168.105.4:31424: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31424: Get "http://192.168.105.4:31424": dial tcp 192.168.105.4:31424: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31424: Get "http://192.168.105.4:31424": dial tcp 192.168.105.4:31424: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:31424: Get "http://192.168.105.4:31424": dial tcp 192.168.105.4:31424: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-680000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-7gk66
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-680000/192.168.105.4
Start Time:       Mon, 02 Oct 2023 03:40:25 -0700
Labels:           app=hello-node-connect
pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
echoserver-arm:
Container ID:   docker://9e031d5c8e220f3f9262c03be3da9e3e03d19c13e3f8a9303ba89b997037aa1f
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Mon, 02 Oct 2023 03:40:39 -0700
Finished:     Mon, 02 Oct 2023 03:40:39 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2f4pb (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-2f4pb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  28s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-7gk66 to functional-680000
Normal   Pulled     14s (x3 over 28s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    14s (x3 over 28s)  kubelet            Created container echoserver-arm
Normal   Started    14s (x3 over 28s)  kubelet            Started container echoserver-arm
Warning  BackOff    2s (x3 over 26s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-7gk66_default(14b33be8-83df-463d-bed0-39fd241cb0d7)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-680000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1613: (dbg) Run:  kubectl --context functional-680000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.194.81
IPs:                      10.102.194.81
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31424/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-680000 -n functional-680000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-680000 ssh cat                                                                                           | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | /mount-9p/test-1696243241145095000                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh stat                                                                                          | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh stat                                                                                          | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh sudo                                                                                          | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount   | -p functional-680000                                                                                                | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port531353657/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh -- ls                                                                                         | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh sudo                                                                                          | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|         | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount   | -p functional-680000                                                                                                | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2085953808/001:/mount1  |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount   | -p functional-680000                                                                                                | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2085953808/001:/mount3  |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount   | -p functional-680000                                                                                                | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2085953808/001:/mount2  |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|         | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|         | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|         | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|         | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|         | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-680000 ssh findmnt                                                                                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|         | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 03:39:18
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 03:39:18.892814    1809 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:39:18.892951    1809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:39:18.892953    1809 out.go:309] Setting ErrFile to fd 2...
	I1002 03:39:18.892955    1809 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:39:18.893074    1809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:39:18.894073    1809 out.go:303] Setting JSON to false
	I1002 03:39:18.910252    1809 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":531,"bootTime":1696242627,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:39:18.910340    1809 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:39:18.914970    1809 out.go:177] * [functional-680000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:39:18.922931    1809 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:39:18.926888    1809 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:39:18.923005    1809 notify.go:220] Checking for updates...
	I1002 03:39:18.932914    1809 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:39:18.939877    1809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:39:18.942903    1809 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:39:18.944061    1809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:39:18.947178    1809 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:39:18.947222    1809 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:39:18.951876    1809 out.go:177] * Using the qemu2 driver based on existing profile
	I1002 03:39:18.956889    1809 start.go:298] selected driver: qemu2
	I1002 03:39:18.956892    1809 start.go:902] validating driver "qemu2" against &{Name:functional-680000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-680000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:39:18.956933    1809 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:39:18.959161    1809 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:39:18.959180    1809 cni.go:84] Creating CNI manager for ""
	I1002 03:39:18.959187    1809 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:39:18.959192    1809 start_flags.go:321] config:
	{Name:functional-680000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-680000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:39:18.963349    1809 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:39:18.971897    1809 out.go:177] * Starting control plane node functional-680000 in cluster functional-680000
	I1002 03:39:18.976019    1809 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:39:18.976032    1809 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:39:18.976046    1809 cache.go:57] Caching tarball of preloaded images
	I1002 03:39:18.976113    1809 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:39:18.976118    1809 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:39:18.976176    1809 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/config.json ...
	I1002 03:39:18.976583    1809 start.go:365] acquiring machines lock for functional-680000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:39:18.976608    1809 start.go:369] acquired machines lock for "functional-680000" in 21.041µs
	I1002 03:39:18.976614    1809 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:39:18.976617    1809 fix.go:54] fixHost starting: 
	I1002 03:39:18.977161    1809 fix.go:102] recreateIfNeeded on functional-680000: state=Running err=<nil>
	W1002 03:39:18.977168    1809 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:39:18.985898    1809 out.go:177] * Updating the running qemu2 "functional-680000" VM ...
	I1002 03:39:18.989943    1809 machine.go:88] provisioning docker machine ...
	I1002 03:39:18.989951    1809 buildroot.go:166] provisioning hostname "functional-680000"
	I1002 03:39:18.989989    1809 main.go:141] libmachine: Using SSH client type: native
	I1002 03:39:18.990239    1809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1002 03:39:18.990243    1809 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-680000 && echo "functional-680000" | sudo tee /etc/hostname
	I1002 03:39:19.045455    1809 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-680000
	
	I1002 03:39:19.045506    1809 main.go:141] libmachine: Using SSH client type: native
	I1002 03:39:19.045746    1809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1002 03:39:19.045753    1809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-680000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-680000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-680000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 03:39:19.096421    1809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 03:39:19.096428    1809 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17340-994/.minikube CaCertPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17340-994/.minikube}
	I1002 03:39:19.096433    1809 buildroot.go:174] setting up certificates
	I1002 03:39:19.096440    1809 provision.go:83] configureAuth start
	I1002 03:39:19.096442    1809 provision.go:138] copyHostCerts
	I1002 03:39:19.096509    1809 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-994/.minikube/key.pem, removing ...
	I1002 03:39:19.096513    1809 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-994/.minikube/key.pem
	I1002 03:39:19.096625    1809 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17340-994/.minikube/key.pem (1679 bytes)
	I1002 03:39:19.096794    1809 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-994/.minikube/ca.pem, removing ...
	I1002 03:39:19.096796    1809 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-994/.minikube/ca.pem
	I1002 03:39:19.096838    1809 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17340-994/.minikube/ca.pem (1082 bytes)
	I1002 03:39:19.096926    1809 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-994/.minikube/cert.pem, removing ...
	I1002 03:39:19.096928    1809 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-994/.minikube/cert.pem
	I1002 03:39:19.097078    1809 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17340-994/.minikube/cert.pem (1123 bytes)
	I1002 03:39:19.097181    1809 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca-key.pem org=jenkins.functional-680000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-680000]
	I1002 03:39:19.212805    1809 provision.go:172] copyRemoteCerts
	I1002 03:39:19.212843    1809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 03:39:19.212850    1809 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/functional-680000/id_rsa Username:docker}
	I1002 03:39:19.240526    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 03:39:19.247078    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1002 03:39:19.254005    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 03:39:19.261026    1809 provision.go:86] duration metric: configureAuth took 164.5795ms
	I1002 03:39:19.261032    1809 buildroot.go:189] setting minikube options for container-runtime
	I1002 03:39:19.261135    1809 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:39:19.261181    1809 main.go:141] libmachine: Using SSH client type: native
	I1002 03:39:19.261391    1809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1002 03:39:19.261395    1809 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 03:39:19.314626    1809 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1002 03:39:19.314631    1809 buildroot.go:70] root file system type: tmpfs
	I1002 03:39:19.314685    1809 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 03:39:19.314732    1809 main.go:141] libmachine: Using SSH client type: native
	I1002 03:39:19.314956    1809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1002 03:39:19.314993    1809 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 03:39:19.368318    1809 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 03:39:19.368358    1809 main.go:141] libmachine: Using SSH client type: native
	I1002 03:39:19.368585    1809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1002 03:39:19.368592    1809 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 03:39:19.419140    1809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 03:39:19.419147    1809 machine.go:91] provisioned docker machine in 429.195667ms
	I1002 03:39:19.419150    1809 start.go:300] post-start starting for "functional-680000" (driver="qemu2")
	I1002 03:39:19.419157    1809 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 03:39:19.419197    1809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 03:39:19.419204    1809 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/functional-680000/id_rsa Username:docker}
	I1002 03:39:19.446071    1809 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 03:39:19.447466    1809 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 03:39:19.447470    1809 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-994/.minikube/addons for local assets ...
	I1002 03:39:19.447544    1809 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-994/.minikube/files for local assets ...
	I1002 03:39:19.447629    1809 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem -> 14092.pem in /etc/ssl/certs
	I1002 03:39:19.447716    1809 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/test/nested/copy/1409/hosts -> hosts in /etc/test/nested/copy/1409
	I1002 03:39:19.447746    1809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1409
	I1002 03:39:19.450331    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem --> /etc/ssl/certs/14092.pem (1708 bytes)
	I1002 03:39:19.457744    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/test/nested/copy/1409/hosts --> /etc/test/nested/copy/1409/hosts (40 bytes)
	I1002 03:39:19.464222    1809 start.go:303] post-start completed in 45.065458ms
	I1002 03:39:19.464226    1809 fix.go:56] fixHost completed within 487.604375ms
	I1002 03:39:19.464259    1809 main.go:141] libmachine: Using SSH client type: native
	I1002 03:39:19.464495    1809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102894760] 0x102896ed0 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1002 03:39:19.464499    1809 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 03:39:19.516799    1809 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696243159.428426813
	
	I1002 03:39:19.516804    1809 fix.go:206] guest clock: 1696243159.428426813
	I1002 03:39:19.516807    1809 fix.go:219] Guest: 2023-10-02 03:39:19.428426813 -0700 PDT Remote: 2023-10-02 03:39:19.464227 -0700 PDT m=+0.590008792 (delta=-35.800187ms)
	I1002 03:39:19.516815    1809 fix.go:190] guest clock delta is within tolerance: -35.800187ms
	I1002 03:39:19.516817    1809 start.go:83] releasing machines lock for "functional-680000", held for 540.200167ms
	I1002 03:39:19.517108    1809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 03:39:19.517109    1809 ssh_runner.go:195] Run: cat /version.json
	I1002 03:39:19.517115    1809 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/functional-680000/id_rsa Username:docker}
	I1002 03:39:19.517127    1809 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/functional-680000/id_rsa Username:docker}
	I1002 03:39:19.544832    1809 ssh_runner.go:195] Run: systemctl --version
	I1002 03:39:19.583638    1809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 03:39:19.585225    1809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 03:39:19.585249    1809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 03:39:19.587922    1809 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 03:39:19.587926    1809 start.go:469] detecting cgroup driver to use...
	I1002 03:39:19.587982    1809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 03:39:19.593396    1809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 03:39:19.597073    1809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 03:39:19.600340    1809 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 03:39:19.600366    1809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 03:39:19.603545    1809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 03:39:19.606655    1809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 03:39:19.609839    1809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 03:39:19.612863    1809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 03:39:19.616164    1809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 03:39:19.619116    1809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 03:39:19.622076    1809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 03:39:19.625574    1809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:39:19.725944    1809 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 03:39:19.731797    1809 start.go:469] detecting cgroup driver to use...
	I1002 03:39:19.731839    1809 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 03:39:19.738957    1809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 03:39:19.747484    1809 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 03:39:19.753258    1809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 03:39:19.757791    1809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 03:39:19.762799    1809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 03:39:19.768051    1809 ssh_runner.go:195] Run: which cri-dockerd
	I1002 03:39:19.769478    1809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 03:39:19.772101    1809 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 03:39:19.777522    1809 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 03:39:19.880991    1809 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 03:39:19.984901    1809 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 03:39:19.984954    1809 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 03:39:19.990491    1809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:39:20.095905    1809 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 03:39:31.287065    1809 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.336788667s)
	I1002 03:39:31.287127    1809 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 03:39:31.367558    1809 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 03:39:31.453019    1809 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 03:39:31.536017    1809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:39:31.615713    1809 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 03:39:31.627155    1809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:39:31.706852    1809 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1002 03:39:31.734488    1809 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 03:39:31.734545    1809 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 03:39:31.736716    1809 start.go:537] Will wait 60s for crictl version
	I1002 03:39:31.736752    1809 ssh_runner.go:195] Run: which crictl
	I1002 03:39:31.738300    1809 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 03:39:31.753862    1809 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1002 03:39:31.753921    1809 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 03:39:31.766028    1809 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 03:39:31.783566    1809 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1002 03:39:31.783695    1809 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1002 03:39:31.789557    1809 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 03:39:31.792565    1809 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:39:31.792623    1809 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 03:39:31.798444    1809 docker.go:664] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-680000
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1002 03:39:31.798455    1809 docker.go:594] Images already preloaded, skipping extraction
	I1002 03:39:31.798499    1809 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 03:39:31.803955    1809 docker.go:664] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-680000
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1002 03:39:31.803961    1809 cache_images.go:84] Images are preloaded, skipping loading
	I1002 03:39:31.804009    1809 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 03:39:31.811442    1809 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 03:39:31.811465    1809 cni.go:84] Creating CNI manager for ""
	I1002 03:39:31.811470    1809 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:39:31.811474    1809 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 03:39:31.811482    1809 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-680000 NodeName:functional-680000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 03:39:31.811550    1809 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-680000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 03:39:31.811587    1809 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-680000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:functional-680000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1002 03:39:31.811639    1809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 03:39:31.814463    1809 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 03:39:31.814486    1809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 03:39:31.817556    1809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1002 03:39:31.822630    1809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 03:39:31.829562    1809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I1002 03:39:31.837659    1809 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I1002 03:39:31.841169    1809 certs.go:56] Setting up /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000 for IP: 192.168.105.4
	I1002 03:39:31.841180    1809 certs.go:190] acquiring lock for shared ca certs: {Name:mkb95ac88d0fec37f1e658f6bb500deee9ee7493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:39:31.841311    1809 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17340-994/.minikube/ca.key
	I1002 03:39:31.841344    1809 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17340-994/.minikube/proxy-client-ca.key
	I1002 03:39:31.841394    1809 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.key
	I1002 03:39:31.841431    1809 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/apiserver.key.942c473b
	I1002 03:39:31.841464    1809 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/proxy-client.key
	I1002 03:39:31.841591    1809 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/1409.pem (1338 bytes)
	W1002 03:39:31.841612    1809 certs.go:433] ignoring /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/1409_empty.pem, impossibly tiny 0 bytes
	I1002 03:39:31.841616    1809 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 03:39:31.841635    1809 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem (1082 bytes)
	I1002 03:39:31.841651    1809 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem (1123 bytes)
	I1002 03:39:31.841666    1809 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/key.pem (1679 bytes)
	I1002 03:39:31.841701    1809 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem (1708 bytes)
	I1002 03:39:31.842055    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 03:39:31.851435    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 03:39:31.860754    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 03:39:31.867960    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 03:39:31.874474    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 03:39:31.881803    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 03:39:31.889143    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 03:39:31.896421    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 03:39:31.903003    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/certs/1409.pem --> /usr/share/ca-certificates/1409.pem (1338 bytes)
	I1002 03:39:31.909838    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem --> /usr/share/ca-certificates/14092.pem (1708 bytes)
	I1002 03:39:31.917162    1809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 03:39:31.924005    1809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 03:39:31.928802    1809 ssh_runner.go:195] Run: openssl version
	I1002 03:39:31.930516    1809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1409.pem && ln -fs /usr/share/ca-certificates/1409.pem /etc/ssl/certs/1409.pem"
	I1002 03:39:31.934095    1809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1409.pem
	I1002 03:39:31.935734    1809 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:37 /usr/share/ca-certificates/1409.pem
	I1002 03:39:31.935756    1809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1409.pem
	I1002 03:39:31.937529    1809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1409.pem /etc/ssl/certs/51391683.0"
	I1002 03:39:31.940298    1809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14092.pem && ln -fs /usr/share/ca-certificates/14092.pem /etc/ssl/certs/14092.pem"
	I1002 03:39:31.943178    1809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14092.pem
	I1002 03:39:31.944461    1809 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:37 /usr/share/ca-certificates/14092.pem
	I1002 03:39:31.944477    1809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14092.pem
	I1002 03:39:31.946434    1809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14092.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 03:39:31.949611    1809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 03:39:31.952747    1809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 03:39:31.954264    1809 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1002 03:39:31.954282    1809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 03:39:31.956020    1809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 03:39:31.958723    1809 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 03:39:31.960111    1809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 03:39:31.961804    1809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 03:39:31.963611    1809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 03:39:31.965429    1809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 03:39:31.967117    1809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 03:39:31.968990    1809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 03:39:31.970848    1809 kubeadm.go:404] StartCluster: {Name:functional-680000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-680000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:39:31.970921    1809 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 03:39:31.976751    1809 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 03:39:31.979651    1809 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 03:39:31.979658    1809 kubeadm.go:636] restartCluster start
	I1002 03:39:31.979679    1809 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 03:39:31.982238    1809 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 03:39:31.982518    1809 kubeconfig.go:92] found "functional-680000" server: "https://192.168.105.4:8441"
	I1002 03:39:31.983257    1809 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 03:39:31.986131    1809 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I1002 03:39:31.986135    1809 kubeadm.go:1128] stopping kube-system containers ...
	I1002 03:39:31.986171    1809 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 03:39:31.993496    1809 docker.go:463] Stopping containers: [bf89d6de785e 64ee5271293d 1b8f6950e0c6 cea09229f801 ce874a57d19f 00476c0e79d9 d24fe041dd9d 84dd3b897833 6e6c5316e884 a5f277727dc4 7324c81aebec 59f5bcb4d953 78254289c1c9 96c3e7902ad5 caacc1fbe3b2 f518425b5420 e8b242169ed2 90f566698bfd 2c64785b4db2 f5552c57a4a3 402911cc0464 3f3c0d7479db b1ea10a914df 63608807ae8e 3398d3d4b136 7688c05d5ca2 ed95474b4792 5245b8fbc7cd]
	I1002 03:39:31.993548    1809 ssh_runner.go:195] Run: docker stop bf89d6de785e 64ee5271293d 1b8f6950e0c6 cea09229f801 ce874a57d19f 00476c0e79d9 d24fe041dd9d 84dd3b897833 6e6c5316e884 a5f277727dc4 7324c81aebec 59f5bcb4d953 78254289c1c9 96c3e7902ad5 caacc1fbe3b2 f518425b5420 e8b242169ed2 90f566698bfd 2c64785b4db2 f5552c57a4a3 402911cc0464 3f3c0d7479db b1ea10a914df 63608807ae8e 3398d3d4b136 7688c05d5ca2 ed95474b4792 5245b8fbc7cd
	I1002 03:39:32.000288    1809 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 03:39:32.102426    1809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 03:39:32.107075    1809 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Oct  2 10:38 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct  2 10:38 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct  2 10:38 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Oct  2 10:38 /etc/kubernetes/scheduler.conf
	
	I1002 03:39:32.107108    1809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 03:39:32.110690    1809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 03:39:32.114146    1809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 03:39:32.117245    1809 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 03:39:32.117268    1809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 03:39:32.120453    1809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 03:39:32.123257    1809 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 03:39:32.123281    1809 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 03:39:32.126115    1809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 03:39:32.129412    1809 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 03:39:32.129415    1809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 03:39:32.149929    1809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 03:39:32.590059    1809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 03:39:32.706916    1809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 03:39:32.733669    1809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 03:39:32.775246    1809 api_server.go:52] waiting for apiserver process to appear ...
	I1002 03:39:32.775307    1809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 03:39:32.783577    1809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 03:39:33.289920    1809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 03:39:33.789820    1809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 03:39:33.794383    1809 api_server.go:72] duration metric: took 1.019158167s to wait for apiserver process to appear ...
	I1002 03:39:33.794388    1809 api_server.go:88] waiting for apiserver healthz status ...
	I1002 03:39:33.794395    1809 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1002 03:39:35.454176    1809 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 03:39:35.454185    1809 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 03:39:35.454189    1809 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1002 03:39:35.493633    1809 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 03:39:35.493643    1809 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 03:39:35.995684    1809 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1002 03:39:35.998997    1809 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 03:39:35.999003    1809 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 03:39:36.495691    1809 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1002 03:39:36.499085    1809 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 03:39:36.499090    1809 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 03:39:36.995660    1809 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1002 03:39:36.999048    1809 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1002 03:39:37.004102    1809 api_server.go:141] control plane version: v1.28.2
	I1002 03:39:37.004107    1809 api_server.go:131] duration metric: took 3.209782917s to wait for apiserver health ...
	I1002 03:39:37.004110    1809 cni.go:84] Creating CNI manager for ""
	I1002 03:39:37.004115    1809 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:39:37.008381    1809 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 03:39:37.011262    1809 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 03:39:37.014414    1809 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 03:39:37.019003    1809 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 03:39:37.023582    1809 system_pods.go:59] 7 kube-system pods found
	I1002 03:39:37.023589    1809 system_pods.go:61] "coredns-5dd5756b68-sf85r" [ad56743f-f408-4f21-8c86-ad0ef0155606] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 03:39:37.023592    1809 system_pods.go:61] "etcd-functional-680000" [625ca71c-46bd-4713-ab1f-17022fadf78e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 03:39:37.023596    1809 system_pods.go:61] "kube-apiserver-functional-680000" [ee7b993d-d12b-4264-9890-de7b415dfcb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 03:39:37.023599    1809 system_pods.go:61] "kube-controller-manager-functional-680000" [9a0f17ff-0816-4138-91bc-e64f58ddd806] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 03:39:37.023601    1809 system_pods.go:61] "kube-proxy-t44dw" [6e4e08b4-53af-4978-bd1f-278eb2b69695] Running
	I1002 03:39:37.023603    1809 system_pods.go:61] "kube-scheduler-functional-680000" [b01d5d5f-be83-4805-9e8f-a5561f158333] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 03:39:37.023605    1809 system_pods.go:61] "storage-provisioner" [8446c9ef-c480-496c-81b0-1bbda663315f] Running
	I1002 03:39:37.023606    1809 system_pods.go:74] duration metric: took 4.600333ms to wait for pod list to return data ...
	I1002 03:39:37.023609    1809 node_conditions.go:102] verifying NodePressure condition ...
	I1002 03:39:37.025128    1809 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1002 03:39:37.025134    1809 node_conditions.go:123] node cpu capacity is 2
	I1002 03:39:37.025138    1809 node_conditions.go:105] duration metric: took 1.527917ms to run NodePressure ...
	I1002 03:39:37.025145    1809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 03:39:37.086980    1809 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 03:39:37.089181    1809 kubeadm.go:787] kubelet initialised
	I1002 03:39:37.089185    1809 kubeadm.go:788] duration metric: took 2.198417ms waiting for restarted kubelet to initialise ...
	I1002 03:39:37.089188    1809 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 03:39:37.091985    1809 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sf85r" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:39.101481    1809 pod_ready.go:92] pod "coredns-5dd5756b68-sf85r" in "kube-system" namespace has status "Ready":"True"
	I1002 03:39:39.101486    1809 pod_ready.go:81] duration metric: took 2.009539208s waiting for pod "coredns-5dd5756b68-sf85r" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:39.101491    1809 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:41.110964    1809 pod_ready.go:102] pod "etcd-functional-680000" in "kube-system" namespace has status "Ready":"False"
	I1002 03:39:43.111088    1809 pod_ready.go:102] pod "etcd-functional-680000" in "kube-system" namespace has status "Ready":"False"
	I1002 03:39:45.610873    1809 pod_ready.go:102] pod "etcd-functional-680000" in "kube-system" namespace has status "Ready":"False"
	I1002 03:39:47.617039    1809 pod_ready.go:102] pod "etcd-functional-680000" in "kube-system" namespace has status "Ready":"False"
	I1002 03:39:49.110793    1809 pod_ready.go:92] pod "etcd-functional-680000" in "kube-system" namespace has status "Ready":"True"
	I1002 03:39:49.110800    1809 pod_ready.go:81] duration metric: took 10.009517583s waiting for pod "etcd-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:49.110804    1809 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:49.113796    1809 pod_ready.go:92] pod "kube-apiserver-functional-680000" in "kube-system" namespace has status "Ready":"True"
	I1002 03:39:49.113802    1809 pod_ready.go:81] duration metric: took 2.995292ms waiting for pod "kube-apiserver-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:49.113806    1809 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:51.123978    1809 pod_ready.go:102] pod "kube-controller-manager-functional-680000" in "kube-system" namespace has status "Ready":"False"
	I1002 03:39:53.124340    1809 pod_ready.go:92] pod "kube-controller-manager-functional-680000" in "kube-system" namespace has status "Ready":"True"
	I1002 03:39:53.124348    1809 pod_ready.go:81] duration metric: took 4.010623167s waiting for pod "kube-controller-manager-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:53.124353    1809 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t44dw" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:53.126556    1809 pod_ready.go:92] pod "kube-proxy-t44dw" in "kube-system" namespace has status "Ready":"True"
	I1002 03:39:53.126558    1809 pod_ready.go:81] duration metric: took 2.202708ms waiting for pod "kube-proxy-t44dw" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:53.126561    1809 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:53.128747    1809 pod_ready.go:92] pod "kube-scheduler-functional-680000" in "kube-system" namespace has status "Ready":"True"
	I1002 03:39:53.128749    1809 pod_ready.go:81] duration metric: took 2.185542ms waiting for pod "kube-scheduler-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:53.128753    1809 pod_ready.go:38] duration metric: took 16.039897958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 03:39:53.128761    1809 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 03:39:53.132419    1809 ops.go:34] apiserver oom_adj: -16
	I1002 03:39:53.132422    1809 kubeadm.go:640] restartCluster took 21.15320525s
	I1002 03:39:53.132425    1809 kubeadm.go:406] StartCluster complete in 21.162025708s
	I1002 03:39:53.132432    1809 settings.go:142] acquiring lock: {Name:mk3f5122457e6ee64cf5dd538efdbb968ff53214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:39:53.132515    1809 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:39:53.132856    1809 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/kubeconfig: {Name:mkba984fcf92a3f610125e890c28c2ff94eec9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:39:53.133051    1809 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 03:39:53.133090    1809 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 03:39:53.133122    1809 addons.go:69] Setting storage-provisioner=true in profile "functional-680000"
	I1002 03:39:53.133127    1809 addons.go:231] Setting addon storage-provisioner=true in "functional-680000"
	W1002 03:39:53.133130    1809 addons.go:240] addon storage-provisioner should already be in state true
	I1002 03:39:53.133146    1809 host.go:66] Checking if "functional-680000" exists ...
	I1002 03:39:53.133154    1809 addons.go:69] Setting default-storageclass=true in profile "functional-680000"
	I1002 03:39:53.133161    1809 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-680000"
	I1002 03:39:53.133185    1809 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	W1002 03:39:53.133373    1809 host.go:54] host status for "functional-680000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17340-994/.minikube/machines/functional-680000/monitor: connect: connection refused
	W1002 03:39:53.133380    1809 addons.go:277] "functional-680000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I1002 03:39:53.134141    1809 addons.go:231] Setting addon default-storageclass=true in "functional-680000"
	W1002 03:39:53.134144    1809 addons.go:240] addon default-storageclass should already be in state true
	I1002 03:39:53.134151    1809 host.go:66] Checking if "functional-680000" exists ...
	I1002 03:39:53.134742    1809 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 03:39:53.134746    1809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 03:39:53.134751    1809 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/functional-680000/id_rsa Username:docker}
	I1002 03:39:53.135112    1809 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-680000" context rescaled to 1 replicas
	I1002 03:39:53.135120    1809 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:39:53.143050    1809 out.go:177] * Verifying Kubernetes components...
	I1002 03:39:53.147146    1809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 03:39:53.168234    1809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 03:39:53.180766    1809 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1002 03:39:53.180791    1809 node_ready.go:35] waiting up to 6m0s for node "functional-680000" to be "Ready" ...
	I1002 03:39:53.181992    1809 node_ready.go:49] node "functional-680000" has status "Ready":"True"
	I1002 03:39:53.182003    1809 node_ready.go:38] duration metric: took 1.199917ms waiting for node "functional-680000" to be "Ready" ...
	I1002 03:39:53.182006    1809 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 03:39:53.184683    1809 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sf85r" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:53.423933    1809 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1002 03:39:53.427776    1809 addons.go:502] enable addons completed in 294.693542ms: enabled=[storage-provisioner default-storageclass]
	I1002 03:39:53.510800    1809 pod_ready.go:92] pod "coredns-5dd5756b68-sf85r" in "kube-system" namespace has status "Ready":"True"
	I1002 03:39:53.510804    1809 pod_ready.go:81] duration metric: took 326.124375ms waiting for pod "coredns-5dd5756b68-sf85r" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:53.510808    1809 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:53.911272    1809 pod_ready.go:92] pod "etcd-functional-680000" in "kube-system" namespace has status "Ready":"True"
	I1002 03:39:53.911278    1809 pod_ready.go:81] duration metric: took 400.474584ms waiting for pod "etcd-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:53.911281    1809 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:54.310857    1809 pod_ready.go:92] pod "kube-apiserver-functional-680000" in "kube-system" namespace has status "Ready":"True"
	I1002 03:39:54.310864    1809 pod_ready.go:81] duration metric: took 399.588416ms waiting for pod "kube-apiserver-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:54.310869    1809 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:54.710875    1809 pod_ready.go:92] pod "kube-controller-manager-functional-680000" in "kube-system" namespace has status "Ready":"True"
	I1002 03:39:54.710881    1809 pod_ready.go:81] duration metric: took 400.018125ms waiting for pod "kube-controller-manager-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:54.710885    1809 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t44dw" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:55.111228    1809 pod_ready.go:92] pod "kube-proxy-t44dw" in "kube-system" namespace has status "Ready":"True"
	I1002 03:39:55.111235    1809 pod_ready.go:81] duration metric: took 400.356125ms waiting for pod "kube-proxy-t44dw" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:55.111240    1809 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:55.510890    1809 pod_ready.go:92] pod "kube-scheduler-functional-680000" in "kube-system" namespace has status "Ready":"True"
	I1002 03:39:55.510897    1809 pod_ready.go:81] duration metric: took 399.663334ms waiting for pod "kube-scheduler-functional-680000" in "kube-system" namespace to be "Ready" ...
	I1002 03:39:55.510901    1809 pod_ready.go:38] duration metric: took 2.328939917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 03:39:55.510911    1809 api_server.go:52] waiting for apiserver process to appear ...
	I1002 03:39:55.510992    1809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 03:39:55.515553    1809 api_server.go:72] duration metric: took 2.380476458s to wait for apiserver process to appear ...
	I1002 03:39:55.515556    1809 api_server.go:88] waiting for apiserver healthz status ...
	I1002 03:39:55.515561    1809 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1002 03:39:55.518599    1809 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1002 03:39:55.519294    1809 api_server.go:141] control plane version: v1.28.2
	I1002 03:39:55.519299    1809 api_server.go:131] duration metric: took 3.740791ms to wait for apiserver health ...
	I1002 03:39:55.519301    1809 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 03:39:55.712655    1809 system_pods.go:59] 7 kube-system pods found
	I1002 03:39:55.712662    1809 system_pods.go:61] "coredns-5dd5756b68-sf85r" [ad56743f-f408-4f21-8c86-ad0ef0155606] Running
	I1002 03:39:55.712664    1809 system_pods.go:61] "etcd-functional-680000" [625ca71c-46bd-4713-ab1f-17022fadf78e] Running
	I1002 03:39:55.712666    1809 system_pods.go:61] "kube-apiserver-functional-680000" [ee7b993d-d12b-4264-9890-de7b415dfcb9] Running
	I1002 03:39:55.712668    1809 system_pods.go:61] "kube-controller-manager-functional-680000" [9a0f17ff-0816-4138-91bc-e64f58ddd806] Running
	I1002 03:39:55.712670    1809 system_pods.go:61] "kube-proxy-t44dw" [6e4e08b4-53af-4978-bd1f-278eb2b69695] Running
	I1002 03:39:55.712671    1809 system_pods.go:61] "kube-scheduler-functional-680000" [b01d5d5f-be83-4805-9e8f-a5561f158333] Running
	I1002 03:39:55.712673    1809 system_pods.go:61] "storage-provisioner" [8446c9ef-c480-496c-81b0-1bbda663315f] Running
	I1002 03:39:55.712676    1809 system_pods.go:74] duration metric: took 193.376417ms to wait for pod list to return data ...
	I1002 03:39:55.712679    1809 default_sa.go:34] waiting for default service account to be created ...
	I1002 03:39:55.910756    1809 default_sa.go:45] found service account: "default"
	I1002 03:39:55.910762    1809 default_sa.go:55] duration metric: took 198.085458ms for default service account to be created ...
	I1002 03:39:55.910765    1809 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 03:39:56.112849    1809 system_pods.go:86] 7 kube-system pods found
	I1002 03:39:56.112856    1809 system_pods.go:89] "coredns-5dd5756b68-sf85r" [ad56743f-f408-4f21-8c86-ad0ef0155606] Running
	I1002 03:39:56.112858    1809 system_pods.go:89] "etcd-functional-680000" [625ca71c-46bd-4713-ab1f-17022fadf78e] Running
	I1002 03:39:56.112860    1809 system_pods.go:89] "kube-apiserver-functional-680000" [ee7b993d-d12b-4264-9890-de7b415dfcb9] Running
	I1002 03:39:56.112862    1809 system_pods.go:89] "kube-controller-manager-functional-680000" [9a0f17ff-0816-4138-91bc-e64f58ddd806] Running
	I1002 03:39:56.112864    1809 system_pods.go:89] "kube-proxy-t44dw" [6e4e08b4-53af-4978-bd1f-278eb2b69695] Running
	I1002 03:39:56.112865    1809 system_pods.go:89] "kube-scheduler-functional-680000" [b01d5d5f-be83-4805-9e8f-a5561f158333] Running
	I1002 03:39:56.112867    1809 system_pods.go:89] "storage-provisioner" [8446c9ef-c480-496c-81b0-1bbda663315f] Running
	I1002 03:39:56.112869    1809 system_pods.go:126] duration metric: took 202.10625ms to wait for k8s-apps to be running ...
	I1002 03:39:56.112871    1809 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 03:39:56.112941    1809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 03:39:56.118094    1809 system_svc.go:56] duration metric: took 5.219166ms WaitForService to wait for kubelet.
	I1002 03:39:56.118099    1809 kubeadm.go:581] duration metric: took 2.983035708s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 03:39:56.118107    1809 node_conditions.go:102] verifying NodePressure condition ...
	I1002 03:39:56.310874    1809 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1002 03:39:56.310881    1809 node_conditions.go:123] node cpu capacity is 2
	I1002 03:39:56.310886    1809 node_conditions.go:105] duration metric: took 192.780916ms to run NodePressure ...
	I1002 03:39:56.310891    1809 start.go:228] waiting for startup goroutines ...
	I1002 03:39:56.310894    1809 start.go:233] waiting for cluster config update ...
	I1002 03:39:56.310898    1809 start.go:242] writing updated cluster config ...
	I1002 03:39:56.311198    1809 ssh_runner.go:195] Run: rm -f paused
	I1002 03:39:56.341337    1809 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I1002 03:39:56.345209    1809 out.go:177] * Done! kubectl is now configured to use "functional-680000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-10-02 10:38:05 UTC, ends at Mon 2023-10-02 10:40:54 UTC. --
	Oct 02 10:40:42 functional-680000 dockerd[6611]: time="2023-10-02T10:40:42.610459133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:40:42 functional-680000 dockerd[6611]: time="2023-10-02T10:40:42.610469258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 10:40:42 functional-680000 dockerd[6611]: time="2023-10-02T10:40:42.610475300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:40:42 functional-680000 cri-dockerd[6867]: time="2023-10-02T10:40:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e9267839300f8d1cc9b5b6cfbb004849d6cb9e737a9d04b7d3d57fa0b47d7fb4/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 02 10:40:43 functional-680000 cri-dockerd[6867]: time="2023-10-02T10:40:43Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Oct 02 10:40:43 functional-680000 dockerd[6611]: time="2023-10-02T10:40:43.945224659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 10:40:43 functional-680000 dockerd[6611]: time="2023-10-02T10:40:43.945257158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:40:43 functional-680000 dockerd[6611]: time="2023-10-02T10:40:43.945267325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 10:40:43 functional-680000 dockerd[6611]: time="2023-10-02T10:40:43.945273867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:40:43 functional-680000 dockerd[6604]: time="2023-10-02T10:40:43.999851432Z" level=info msg="ignoring event" container=68b37f0d213a32c0caa8136827fba8c605d56ca0c2b086acd6fb3ee4ecbde39f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:40:44 functional-680000 dockerd[6611]: time="2023-10-02T10:40:44.000218553Z" level=info msg="shim disconnected" id=68b37f0d213a32c0caa8136827fba8c605d56ca0c2b086acd6fb3ee4ecbde39f namespace=moby
	Oct 02 10:40:44 functional-680000 dockerd[6611]: time="2023-10-02T10:40:44.000249636Z" level=warning msg="cleaning up after shim disconnected" id=68b37f0d213a32c0caa8136827fba8c605d56ca0c2b086acd6fb3ee4ecbde39f namespace=moby
	Oct 02 10:40:44 functional-680000 dockerd[6611]: time="2023-10-02T10:40:44.000254220Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 02 10:40:44 functional-680000 dockerd[6611]: time="2023-10-02T10:40:44.860340616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 10:40:44 functional-680000 dockerd[6611]: time="2023-10-02T10:40:44.860382574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:40:44 functional-680000 dockerd[6611]: time="2023-10-02T10:40:44.860398074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 10:40:44 functional-680000 dockerd[6611]: time="2023-10-02T10:40:44.860409199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:40:44 functional-680000 dockerd[6604]: time="2023-10-02T10:40:44.893160178Z" level=info msg="ignoring event" container=8a8167ac1d7786041e1f286e782a6b3cfe7aa79aa10d8952bbb89e58582bea3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:40:44 functional-680000 dockerd[6611]: time="2023-10-02T10:40:44.893700589Z" level=info msg="shim disconnected" id=8a8167ac1d7786041e1f286e782a6b3cfe7aa79aa10d8952bbb89e58582bea3d namespace=moby
	Oct 02 10:40:44 functional-680000 dockerd[6611]: time="2023-10-02T10:40:44.893792463Z" level=warning msg="cleaning up after shim disconnected" id=8a8167ac1d7786041e1f286e782a6b3cfe7aa79aa10d8952bbb89e58582bea3d namespace=moby
	Oct 02 10:40:44 functional-680000 dockerd[6611]: time="2023-10-02T10:40:44.893802796Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 02 10:40:45 functional-680000 dockerd[6604]: time="2023-10-02T10:40:45.463775832Z" level=info msg="ignoring event" container=e9267839300f8d1cc9b5b6cfbb004849d6cb9e737a9d04b7d3d57fa0b47d7fb4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:40:45 functional-680000 dockerd[6611]: time="2023-10-02T10:40:45.464122661Z" level=info msg="shim disconnected" id=e9267839300f8d1cc9b5b6cfbb004849d6cb9e737a9d04b7d3d57fa0b47d7fb4 namespace=moby
	Oct 02 10:40:45 functional-680000 dockerd[6611]: time="2023-10-02T10:40:45.464154744Z" level=warning msg="cleaning up after shim disconnected" id=e9267839300f8d1cc9b5b6cfbb004849d6cb9e737a9d04b7d3d57fa0b47d7fb4 namespace=moby
	Oct 02 10:40:45 functional-680000 dockerd[6611]: time="2023-10-02T10:40:45.464159161Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8a8167ac1d778       72565bf5bbedf                                                                                         10 seconds ago       Exited              echoserver-arm            3                   5df31c1608a8f       hello-node-759d89bdcc-9jzx9
	68b37f0d213a3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   11 seconds ago       Exited              mount-munger              0                   e9267839300f8       busybox-mount
	9e031d5c8e220       72565bf5bbedf                                                                                         15 seconds ago       Exited              echoserver-arm            2                   c0d73cc448af9       hello-node-connect-7799dfb7c6-7gk66
	e0ba03078cb46       nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755                         21 seconds ago       Running             myfrontend                0                   514016deb850d       sp-pod
	e0ba2f78186cf       nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                         37 seconds ago       Running             nginx                     0                   a72ffe47c8f97       nginx-svc
	acdddf11d17bc       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   587aef351fc12       coredns-5dd5756b68-sf85r
	ce6849f1995e5       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   c33d86066e086       storage-provisioner
	a1e458344b22e       7da62c127fc0f                                                                                         About a minute ago   Running             kube-proxy                2                   55bd6b97366a5       kube-proxy-t44dw
	e0a8398cac646       9cdd6470f48c8                                                                                         About a minute ago   Running             etcd                      2                   c333345a48731       etcd-functional-680000
	66f81211f5cc8       64fc40cee3716                                                                                         About a minute ago   Running             kube-scheduler            2                   0ba532f0083d8       kube-scheduler-functional-680000
	edcf72eda722e       89d57b83c1786                                                                                         About a minute ago   Running             kube-controller-manager   2                   f2137d8230fd5       kube-controller-manager-functional-680000
	8a1caa97e2b74       30bb499447fe1                                                                                         About a minute ago   Running             kube-apiserver            0                   0d697f37e0b7d       kube-apiserver-functional-680000
	bf89d6de785e1       97e04611ad434                                                                                         2 minutes ago        Exited              coredns                   1                   84dd3b8978339       coredns-5dd5756b68-sf85r
	64ee5271293d1       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   59f5bcb4d9535       storage-provisioner
	1b8f6950e0c66       9cdd6470f48c8                                                                                         2 minutes ago        Exited              etcd                      1                   6e6c5316e8842       etcd-functional-680000
	cea09229f801f       7da62c127fc0f                                                                                         2 minutes ago        Exited              kube-proxy                1                   7324c81aebec9       kube-proxy-t44dw
	00476c0e79d9a       64fc40cee3716                                                                                         2 minutes ago        Exited              kube-scheduler            1                   96c3e7902ad59       kube-scheduler-functional-680000
	d24fe041dd9d4       89d57b83c1786                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   78254289c1c9f       kube-controller-manager-functional-680000
	
	* 
	* ==> coredns [acdddf11d17b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60855 - 34238 "HINFO IN 4473120086244738652.8297677282849765909. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004246229s
	[INFO] 10.244.0.1:32775 - 30966 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000106123s
	[INFO] 10.244.0.1:5296 - 59638 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000123748s
	[INFO] 10.244.0.1:36730 - 61950 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000033667s
	[INFO] 10.244.0.1:26342 - 16671 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.0010814s
	[INFO] 10.244.0.1:19520 - 6304 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000097248s
	[INFO] 10.244.0.1:4171 - 59720 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000133748s
	
	* 
	* ==> coredns [bf89d6de785e] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44470 - 47885 "HINFO IN 1545438384991966939.308987214920707155. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.004300101s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-680000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-680000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=functional-680000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T03_38_22_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 10:38:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-680000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 10:40:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 10:40:36 +0000   Mon, 02 Oct 2023 10:38:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 10:40:36 +0000   Mon, 02 Oct 2023 10:38:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 10:40:36 +0000   Mon, 02 Oct 2023 10:38:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 10:40:36 +0000   Mon, 02 Oct 2023 10:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-680000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 f9f5cd4e090843f1b12b561e09725eea
	  System UUID:                f9f5cd4e090843f1b12b561e09725eea
	  Boot ID:                    0cfe0d56-a47e-47b1-9dbc-9515f4d53e5f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-9jzx9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  default                     hello-node-connect-7799dfb7c6-7gk66          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 coredns-5dd5756b68-sf85r                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m19s
	  kube-system                 etcd-functional-680000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m32s
	  kube-system                 kube-apiserver-functional-680000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-functional-680000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-t44dw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-functional-680000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m17s                  kube-proxy       
	  Normal  Starting                 77s                    kube-proxy       
	  Normal  Starting                 117s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node functional-680000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node functional-680000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m37s (x7 over 2m37s)  kubelet          Node functional-680000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m32s                  kubelet          Node functional-680000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m32s                  kubelet          Node functional-680000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m32s                  kubelet          Node functional-680000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m32s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m29s                  kubelet          Node functional-680000 status is now: NodeReady
	  Normal  RegisteredNode           2m19s                  node-controller  Node functional-680000 event: Registered Node functional-680000 in Controller
	  Normal  RegisteredNode           104s                   node-controller  Node functional-680000 event: Registered Node functional-680000 in Controller
	  Normal  NodeHasNoDiskPressure    82s (x8 over 82s)      kubelet          Node functional-680000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  82s (x8 over 82s)      kubelet          Node functional-680000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 82s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     82s (x7 over 82s)      kubelet          Node functional-680000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           67s                    node-controller  Node functional-680000 event: Registered Node functional-680000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.054282] systemd-fstab-generator[3697]: Ignoring "noauto" for root device
	[  +0.160414] systemd-fstab-generator[3739]: Ignoring "noauto" for root device
	[  +0.096808] systemd-fstab-generator[3750]: Ignoring "noauto" for root device
	[  +0.113814] systemd-fstab-generator[3763]: Ignoring "noauto" for root device
	[  +5.135250] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.196809] systemd-fstab-generator[4322]: Ignoring "noauto" for root device
	[  +0.090017] systemd-fstab-generator[4333]: Ignoring "noauto" for root device
	[  +0.083493] systemd-fstab-generator[4344]: Ignoring "noauto" for root device
	[  +0.078762] systemd-fstab-generator[4355]: Ignoring "noauto" for root device
	[  +0.086129] systemd-fstab-generator[4425]: Ignoring "noauto" for root device
	[  +4.626808] kauditd_printk_skb: 34 callbacks suppressed
	[Oct 2 10:39] systemd-fstab-generator[6137]: Ignoring "noauto" for root device
	[  +0.154124] systemd-fstab-generator[6171]: Ignoring "noauto" for root device
	[  +0.098871] systemd-fstab-generator[6182]: Ignoring "noauto" for root device
	[  +0.116667] systemd-fstab-generator[6195]: Ignoring "noauto" for root device
	[ +11.433998] systemd-fstab-generator[6755]: Ignoring "noauto" for root device
	[  +0.086770] systemd-fstab-generator[6766]: Ignoring "noauto" for root device
	[  +0.081260] systemd-fstab-generator[6777]: Ignoring "noauto" for root device
	[  +0.078464] systemd-fstab-generator[6788]: Ignoring "noauto" for root device
	[  +0.092739] systemd-fstab-generator[6860]: Ignoring "noauto" for root device
	[  +0.993167] systemd-fstab-generator[7117]: Ignoring "noauto" for root device
	[  +3.592420] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 2 10:40] kauditd_printk_skb: 9 callbacks suppressed
	[  +0.834372] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +22.001507] kauditd_printk_skb: 4 callbacks suppressed
	
	* 
	* ==> etcd [1b8f6950e0c6] <==
	* {"level":"info","ts":"2023-10-02T10:38:55.37875Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:38:56.443354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-02T10:38:56.443659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-02T10:38:56.443938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-10-02T10:38:56.443987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-10-02T10:38:56.444008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-10-02T10:38:56.444178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-10-02T10:38:56.444327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-10-02T10:38:56.446727Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T10:38:56.447131Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T10:38:56.449801Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-10-02T10:38:56.449844Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T10:38:56.446738Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-680000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T10:38:56.449986Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T10:38:56.450894Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-02T10:39:20.025271Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-02T10:39:20.025316Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-680000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-10-02T10:39:20.025371Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-02T10:39:20.025422Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-02T10:39:20.031355Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-02T10:39:20.031376Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-02T10:39:20.031433Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-10-02T10:39:20.033117Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-10-02T10:39:20.033156Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-10-02T10:39:20.033163Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-680000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [e0a8398cac64] <==
	* {"level":"info","ts":"2023-10-02T10:39:33.796779Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-02T10:39:33.795519Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T10:39:33.796665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-10-02T10:39:33.803238Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-10-02T10:39:33.803292Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:39:33.803324Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:39:33.796697Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-10-02T10:39:33.807183Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-10-02T10:39:33.797228Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T10:39:33.803203Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T10:39:33.807235Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T10:39:34.884929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-02T10:39:34.885069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-02T10:39:34.885114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-10-02T10:39:34.885204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-10-02T10:39:34.885257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-10-02T10:39:34.885368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-10-02T10:39:34.885411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-10-02T10:39:34.890104Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-680000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T10:39:34.890196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T10:39:34.890821Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T10:39:34.891021Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-02T10:39:34.891293Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T10:39:34.892473Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T10:39:34.893361Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	* 
	* ==> kernel <==
	*  10:40:54 up 2 min,  0 users,  load average: 0.48, 0.28, 0.11
	Linux functional-680000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8a1caa97e2b7] <==
	* I1002 10:39:35.582742       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 10:39:35.582869       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1002 10:39:35.583288       1 aggregator.go:166] initial CRD sync complete...
	I1002 10:39:35.583316       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 10:39:35.583333       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 10:39:35.583347       1 cache.go:39] Caches are synced for autoregister controller
	I1002 10:39:35.583986       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 10:39:35.584565       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 10:39:35.584594       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 10:39:35.584661       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1002 10:39:35.585648       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 10:39:35.641401       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1002 10:39:36.482964       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 10:39:37.116165       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 10:39:37.119238       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 10:39:37.131213       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1002 10:39:37.138410       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 10:39:37.140395       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 10:39:47.802964       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 10:39:47.852566       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 10:39:57.866880       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.156.3"}
	I1002 10:40:03.396822       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1002 10:40:03.452149       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.22.115"}
	I1002 10:40:13.701798       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.98.69"}
	I1002 10:40:25.210955       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.194.81"}
	
	* 
	* ==> kube-controller-manager [d24fe041dd9d] <==
	* I1002 10:39:10.130403       1 shared_informer.go:318] Caches are synced for stateful set
	I1002 10:39:10.136759       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"functional-680000\" does not exist"
	I1002 10:39:10.182371       1 shared_informer.go:318] Caches are synced for taint
	I1002 10:39:10.182408       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1002 10:39:10.182434       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-680000"
	I1002 10:39:10.182452       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1002 10:39:10.182462       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1002 10:39:10.182475       1 taint_manager.go:211] "Sending events to api server"
	I1002 10:39:10.182615       1 event.go:307] "Event occurred" object="functional-680000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-680000 event: Registered Node functional-680000 in Controller"
	I1002 10:39:10.183752       1 shared_informer.go:318] Caches are synced for GC
	I1002 10:39:10.184200       1 shared_informer.go:318] Caches are synced for daemon sets
	I1002 10:39:10.192652       1 shared_informer.go:318] Caches are synced for TTL
	I1002 10:39:10.194803       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1002 10:39:10.195814       1 shared_informer.go:318] Caches are synced for persistent volume
	I1002 10:39:10.235581       1 shared_informer.go:318] Caches are synced for node
	I1002 10:39:10.235620       1 range_allocator.go:174] "Sending events to api server"
	I1002 10:39:10.235632       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1002 10:39:10.235633       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1002 10:39:10.235636       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1002 10:39:10.280216       1 shared_informer.go:318] Caches are synced for attach detach
	I1002 10:39:10.287153       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 10:39:10.337123       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 10:39:10.653856       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 10:39:10.730858       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 10:39:10.730875       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [edcf72eda722] <==
	* I1002 10:39:48.340321       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 10:40:03.399723       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-759d89bdcc to 1"
	I1002 10:40:03.406781       1 event.go:307] "Event occurred" object="default/hello-node-759d89bdcc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-759d89bdcc-9jzx9"
	I1002 10:40:03.410536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="10.695389ms"
	I1002 10:40:03.414823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="4.262765ms"
	I1002 10:40:03.414859       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="21.374µs"
	I1002 10:40:03.444076       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="102.789µs"
	I1002 10:40:09.100237       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="18.708µs"
	I1002 10:40:10.103762       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="82.789µs"
	I1002 10:40:11.112997       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="38.833µs"
	I1002 10:40:20.072628       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1002 10:40:22.256791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="81.123µs"
	I1002 10:40:25.166748       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-7799dfb7c6 to 1"
	I1002 10:40:25.169748       1 event.go:307] "Event occurred" object="default/hello-node-connect-7799dfb7c6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-7799dfb7c6-7gk66"
	I1002 10:40:25.173655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="6.847561ms"
	I1002 10:40:25.176816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="3.115827ms"
	I1002 10:40:25.176842       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="11.583µs"
	I1002 10:40:25.183261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="23.042µs"
	I1002 10:40:26.284651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="110.373µs"
	I1002 10:40:27.290819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="24.749µs"
	I1002 10:40:33.845765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="28.083µs"
	I1002 10:40:39.850877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="28µs"
	I1002 10:40:40.367050       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="30.249µs"
	I1002 10:40:45.404142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="29.124µs"
	I1002 10:40:51.846190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="28.749µs"
	
	* 
	* ==> kube-proxy [a1e458344b22] <==
	* I1002 10:39:36.373827       1 server_others.go:69] "Using iptables proxy"
	I1002 10:39:36.382044       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I1002 10:39:36.404216       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 10:39:36.404233       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 10:39:36.404930       1 server_others.go:152] "Using iptables Proxier"
	I1002 10:39:36.404949       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 10:39:36.405016       1 server.go:846] "Version info" version="v1.28.2"
	I1002 10:39:36.405020       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 10:39:36.406077       1 config.go:188] "Starting service config controller"
	I1002 10:39:36.406086       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 10:39:36.406094       1 config.go:97] "Starting endpoint slice config controller"
	I1002 10:39:36.406097       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 10:39:36.406393       1 config.go:315] "Starting node config controller"
	I1002 10:39:36.406395       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 10:39:36.506906       1 shared_informer.go:318] Caches are synced for node config
	I1002 10:39:36.506998       1 shared_informer.go:318] Caches are synced for service config
	I1002 10:39:36.507038       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [cea09229f801] <==
	* I1002 10:38:55.702341       1 server_others.go:69] "Using iptables proxy"
	I1002 10:38:57.124230       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I1002 10:38:57.138449       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 10:38:57.138464       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 10:38:57.139152       1 server_others.go:152] "Using iptables Proxier"
	I1002 10:38:57.139208       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 10:38:57.139286       1 server.go:846] "Version info" version="v1.28.2"
	I1002 10:38:57.139291       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 10:38:57.139563       1 config.go:188] "Starting service config controller"
	I1002 10:38:57.139576       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 10:38:57.139584       1 config.go:97] "Starting endpoint slice config controller"
	I1002 10:38:57.139586       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 10:38:57.139773       1 config.go:315] "Starting node config controller"
	I1002 10:38:57.139776       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 10:38:57.239920       1 shared_informer.go:318] Caches are synced for node config
	I1002 10:38:57.239941       1 shared_informer.go:318] Caches are synced for service config
	I1002 10:38:57.239952       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [00476c0e79d9] <==
	* I1002 10:38:55.202836       1 serving.go:348] Generated self-signed cert in-memory
	W1002 10:38:57.074742       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 10:38:57.074856       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 10:38:57.074882       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 10:38:57.074900       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 10:38:57.107366       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1002 10:38:57.107493       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 10:38:57.108661       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 10:38:57.108706       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 10:38:57.108907       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 10:38:57.108722       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 10:38:57.209371       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 10:39:20.062842       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1002 10:39:20.062895       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1002 10:39:20.062940       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1002 10:39:20.063016       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [66f81211f5cc] <==
	* I1002 10:39:34.043832       1 serving.go:348] Generated self-signed cert in-memory
	W1002 10:39:35.523407       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 10:39:35.523422       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 10:39:35.523436       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 10:39:35.523450       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 10:39:35.552694       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1002 10:39:35.555673       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 10:39:35.556560       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 10:39:35.556615       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 10:39:35.556621       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 10:39:35.556627       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 10:39:35.657622       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
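Editor's note: the requestheader_controller warning in the scheduler log above suggests a rolebinding as the usual fix. A concrete, illustrative form is sketched below; the binding name is made up, and the subject is taken from the `system:kube-scheduler` user that appears in the authentication error, not from this cluster's actual configuration.

```shell
# Illustrative sketch only: grant read access to the
# extension-apiserver-authentication configmap in kube-system.
# Binding name is hypothetical; the --user value mirrors the
# "system:kube-scheduler" user reported in the log.
kubectl create rolebinding scheduler-authentication-reader \
  --namespace=kube-system \
  --role=extension-apiserver-authentication-reader \
  --user=system:kube-scheduler
```

Note this warning is routinely transient during control-plane startup and is tolerated here (the scheduler continues without the authentication configuration), so the binding is only needed if the forbidden error persists.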
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 10:38:05 UTC, ends at Mon 2023-10-02 10:40:54 UTC. --
	Oct 02 10:40:32 functional-680000 kubelet[7123]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 10:40:32 functional-680000 kubelet[7123]: I1002 10:40:32.907988    7123 scope.go:117] "RemoveContainer" containerID="ce874a57d19f79a8eccdf75a5ffa5ae5294010cf2ba9c79a1691607a8a1ce5a1"
	Oct 02 10:40:33 functional-680000 kubelet[7123]: I1002 10:40:33.841105    7123 scope.go:117] "RemoveContainer" containerID="29ea746b27dd34debaff2b002fd6116c262ac75c062887f81ccf7847f959067b"
	Oct 02 10:40:33 functional-680000 kubelet[7123]: E1002 10:40:33.841222    7123 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-9jzx9_default(fad13bf5-4ef5-4d4c-856a-085c5527db25)\"" pod="default/hello-node-759d89bdcc-9jzx9" podUID="fad13bf5-4ef5-4d4c-856a-085c5527db25"
	Oct 02 10:40:39 functional-680000 kubelet[7123]: I1002 10:40:39.842036    7123 scope.go:117] "RemoveContainer" containerID="80a18351319db6426fff2fab7f770c248ea8ae9de63f3e7d0dbc8e3b9b48c2c4"
	Oct 02 10:40:39 functional-680000 kubelet[7123]: I1002 10:40:39.851464    7123 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=7.148950029 podCreationTimestamp="2023-10-02 10:40:32 +0000 UTC" firstStartedPulling="2023-10-02 10:40:32.883092589 +0000 UTC m=+60.125312666" lastFinishedPulling="2023-10-02 10:40:33.585577176 +0000 UTC m=+60.827797252" observedRunningTime="2023-10-02 10:40:34.335579916 +0000 UTC m=+61.577799993" watchObservedRunningTime="2023-10-02 10:40:39.851434615 +0000 UTC m=+67.093654692"
	Oct 02 10:40:40 functional-680000 kubelet[7123]: I1002 10:40:40.361306    7123 scope.go:117] "RemoveContainer" containerID="80a18351319db6426fff2fab7f770c248ea8ae9de63f3e7d0dbc8e3b9b48c2c4"
	Oct 02 10:40:40 functional-680000 kubelet[7123]: I1002 10:40:40.361505    7123 scope.go:117] "RemoveContainer" containerID="9e031d5c8e220f3f9262c03be3da9e3e03d19c13e3f8a9303ba89b997037aa1f"
	Oct 02 10:40:40 functional-680000 kubelet[7123]: E1002 10:40:40.361611    7123 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-7gk66_default(14b33be8-83df-463d-bed0-39fd241cb0d7)\"" pod="default/hello-node-connect-7799dfb7c6-7gk66" podUID="14b33be8-83df-463d-bed0-39fd241cb0d7"
	Oct 02 10:40:42 functional-680000 kubelet[7123]: I1002 10:40:42.264963    7123 topology_manager.go:215] "Topology Admit Handler" podUID="68ce12de-1774-4560-9092-c9cfd6b1b59a" podNamespace="default" podName="busybox-mount"
	Oct 02 10:40:42 functional-680000 kubelet[7123]: I1002 10:40:42.391420    7123 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/68ce12de-1774-4560-9092-c9cfd6b1b59a-test-volume\") pod \"busybox-mount\" (UID: \"68ce12de-1774-4560-9092-c9cfd6b1b59a\") " pod="default/busybox-mount"
	Oct 02 10:40:42 functional-680000 kubelet[7123]: I1002 10:40:42.391440    7123 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99xb9\" (UniqueName: \"kubernetes.io/projected/68ce12de-1774-4560-9092-c9cfd6b1b59a-kube-api-access-99xb9\") pod \"busybox-mount\" (UID: \"68ce12de-1774-4560-9092-c9cfd6b1b59a\") " pod="default/busybox-mount"
	Oct 02 10:40:44 functional-680000 kubelet[7123]: I1002 10:40:44.841536    7123 scope.go:117] "RemoveContainer" containerID="29ea746b27dd34debaff2b002fd6116c262ac75c062887f81ccf7847f959067b"
	Oct 02 10:40:45 functional-680000 kubelet[7123]: I1002 10:40:45.399313    7123 scope.go:117] "RemoveContainer" containerID="29ea746b27dd34debaff2b002fd6116c262ac75c062887f81ccf7847f959067b"
	Oct 02 10:40:45 functional-680000 kubelet[7123]: I1002 10:40:45.399451    7123 scope.go:117] "RemoveContainer" containerID="8a8167ac1d7786041e1f286e782a6b3cfe7aa79aa10d8952bbb89e58582bea3d"
	Oct 02 10:40:45 functional-680000 kubelet[7123]: E1002 10:40:45.399541    7123 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-9jzx9_default(fad13bf5-4ef5-4d4c-856a-085c5527db25)\"" pod="default/hello-node-759d89bdcc-9jzx9" podUID="fad13bf5-4ef5-4d4c-856a-085c5527db25"
	Oct 02 10:40:45 functional-680000 kubelet[7123]: I1002 10:40:45.611149    7123 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/68ce12de-1774-4560-9092-c9cfd6b1b59a-test-volume\") pod \"68ce12de-1774-4560-9092-c9cfd6b1b59a\" (UID: \"68ce12de-1774-4560-9092-c9cfd6b1b59a\") "
	Oct 02 10:40:45 functional-680000 kubelet[7123]: I1002 10:40:45.611200    7123 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68ce12de-1774-4560-9092-c9cfd6b1b59a-test-volume" (OuterVolumeSpecName: "test-volume") pod "68ce12de-1774-4560-9092-c9cfd6b1b59a" (UID: "68ce12de-1774-4560-9092-c9cfd6b1b59a"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 02 10:40:45 functional-680000 kubelet[7123]: I1002 10:40:45.611206    7123 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99xb9\" (UniqueName: \"kubernetes.io/projected/68ce12de-1774-4560-9092-c9cfd6b1b59a-kube-api-access-99xb9\") pod \"68ce12de-1774-4560-9092-c9cfd6b1b59a\" (UID: \"68ce12de-1774-4560-9092-c9cfd6b1b59a\") "
	Oct 02 10:40:45 functional-680000 kubelet[7123]: I1002 10:40:45.611223    7123 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/68ce12de-1774-4560-9092-c9cfd6b1b59a-test-volume\") on node \"functional-680000\" DevicePath \"\""
	Oct 02 10:40:45 functional-680000 kubelet[7123]: I1002 10:40:45.613646    7123 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ce12de-1774-4560-9092-c9cfd6b1b59a-kube-api-access-99xb9" (OuterVolumeSpecName: "kube-api-access-99xb9") pod "68ce12de-1774-4560-9092-c9cfd6b1b59a" (UID: "68ce12de-1774-4560-9092-c9cfd6b1b59a"). InnerVolumeSpecName "kube-api-access-99xb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 02 10:40:45 functional-680000 kubelet[7123]: I1002 10:40:45.712774    7123 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-99xb9\" (UniqueName: \"kubernetes.io/projected/68ce12de-1774-4560-9092-c9cfd6b1b59a-kube-api-access-99xb9\") on node \"functional-680000\" DevicePath \"\""
	Oct 02 10:40:46 functional-680000 kubelet[7123]: I1002 10:40:46.407212    7123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9267839300f8d1cc9b5b6cfbb004849d6cb9e737a9d04b7d3d57fa0b47d7fb4"
	Oct 02 10:40:51 functional-680000 kubelet[7123]: I1002 10:40:51.841265    7123 scope.go:117] "RemoveContainer" containerID="9e031d5c8e220f3f9262c03be3da9e3e03d19c13e3f8a9303ba89b997037aa1f"
	Oct 02 10:40:51 functional-680000 kubelet[7123]: E1002 10:40:51.841371    7123 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-7gk66_default(14b33be8-83df-463d-bed0-39fd241cb0d7)\"" pod="default/hello-node-connect-7799dfb7c6-7gk66" podUID="14b33be8-83df-463d-bed0-39fd241cb0d7"
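Editor's note: the kubelet entries above show the CrashLoopBackOff delay growing from "back-off 20s" to "back-off 40s" for the same `echoserver-arm` container. This is consistent with kubelet's default restart back-off, which starts at 10s, doubles per restart, and caps at 5 minutes; a minimal sketch of that sequence (values are the documented defaults, not read from this cluster):

```shell
# Sketch of the default CrashLoopBackOff delay sequence: 10s initial,
# doubling each restart, capped at 300s. Prints the first N delays.
crashloop_delays() {
  restarts=$1
  delay=10
  cap=300
  out=""
  i=0
  while [ "$i" -lt "$restarts" ]; do
    d=$delay
    if [ "$d" -gt "$cap" ]; then d=$cap; fi
    out="$out$d "
    delay=$((delay * 2))
    i=$((i + 1))
  done
  printf '%s\n' "${out% }"
}

crashloop_delays 6   # 10 20 40 80 160 300
```

The 20s and 40s values in the log correspond to the second and third entries of this sequence.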
	
	* 
	* ==> storage-provisioner [64ee5271293d] <==
	* I1002 10:38:55.719889       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 10:38:57.125278       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 10:38:57.125309       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 10:39:14.511874       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 10:39:14.511947       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-680000_814af073-3272-4d8a-9fe4-74b9a94e4cf6!
	I1002 10:39:14.512318       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe272220-0ded-46e0-9155-9e7d59663890", APIVersion:"v1", ResourceVersion:"514", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-680000_814af073-3272-4d8a-9fe4-74b9a94e4cf6 became leader
	I1002 10:39:14.612600       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-680000_814af073-3272-4d8a-9fe4-74b9a94e4cf6!
	
	* 
	* ==> storage-provisioner [ce6849f1995e] <==
	* I1002 10:39:36.384261       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 10:39:36.393763       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 10:39:36.393803       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 10:39:53.781084       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 10:39:53.781147       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-680000_4cb6165e-4eda-4907-986e-0dceca5b1522!
	I1002 10:39:53.781530       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe272220-0ded-46e0-9155-9e7d59663890", APIVersion:"v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-680000_4cb6165e-4eda-4907-986e-0dceca5b1522 became leader
	I1002 10:39:53.881361       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-680000_4cb6165e-4eda-4907-986e-0dceca5b1522!
	I1002 10:40:20.072951       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1002 10:40:20.073428       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f8b99110-2c7d-484a-8961-7f4cfbb16cd0", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1002 10:40:20.073021       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    707e505f-d568-44ec-93f7-fa00dbe95933 388 0 2023-10-02 10:38:36 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-10-02 10:38:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f8b99110-2c7d-484a-8961-7f4cfbb16cd0 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f8b99110-2c7d-484a-8961-7f4cfbb16cd0 715 0 2023-10-02 10:40:20 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-10-02 10:40:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-10-02 10:40:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1002 10:40:20.073742       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f8b99110-2c7d-484a-8961-7f4cfbb16cd0" provisioned
	I1002 10:40:20.073871       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1002 10:40:20.073893       1 volume_store.go:212] Trying to save persistentvolume "pvc-f8b99110-2c7d-484a-8961-7f4cfbb16cd0"
	I1002 10:40:20.079444       1 volume_store.go:219] persistentvolume "pvc-f8b99110-2c7d-484a-8961-7f4cfbb16cd0" saved
	I1002 10:40:20.079687       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f8b99110-2c7d-484a-8961-7f4cfbb16cd0", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f8b99110-2c7d-484a-8961-7f4cfbb16cd0
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-680000 -n functional-680000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-680000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-680000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-680000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-680000/192.168.105.4
	Start Time:       Mon, 02 Oct 2023 03:40:42 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://68b37f0d213a32c0caa8136827fba8c605d56ca0c2b086acd6fb3ee4ecbde39f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 02 Oct 2023 03:40:43 -0700
	      Finished:     Mon, 02 Oct 2023 03:40:43 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-99xb9 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-99xb9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  12s   default-scheduler  Successfully assigned default/busybox-mount to functional-680000
	  Normal  Pulling    12s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     11s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.171s (1.171s including waiting)
	  Normal  Created    11s   kubelet            Created container mount-munger
	  Normal  Started    11s   kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (29.80s)

TestImageBuild/serial/BuildWithBuildArg (1.02s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-330000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-330000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 1abf31582948
	Removing intermediate container 1abf31582948
	 ---> 3a4c457336e0
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in a1b9785f4ed7
	Removing intermediate container a1b9785f4ed7
	 ---> 88becc67fba9
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 31a20bdfe9ff
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
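Editor's note: the `exec /bin/sh: exec format error` at Step 4 follows directly from the platform warnings above — the `gcr.io/google-containers/alpine-with-bash:1.0` base image is `linux/amd64`, while the build host is `linux/arm64/v8`, so its binaries cannot execute during `RUN`. A hedged diagnostic/workaround sketch, assuming a Docker CLI with manifest support and binfmt/QEMU user-mode emulation available in the daemon (neither is confirmed by this report):

```shell
# Confirm the base image's platform (requires registry access).
docker manifest inspect gcr.io/google-containers/alpine-with-bash:1.0

# Possible workaround on an arm64 host: request the image's platform
# explicitly so RUN steps go through emulation (needs qemu-user-static
# binfmt handlers registered with the daemon).
docker build --platform linux/amd64 \
  --build-arg ENV_A=test_env_str --no-cache \
  -t aaa:latest ./testdata/image-build/test-arg
```

The earlier `test-normal` build in the Audit table succeeded because it has no `RUN` step executing foreign-architecture binaries; only the `RUN echo ... $ENV_A $ENV_B` step trips the mismatch.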
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-330000 -n image-330000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-330000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-680000 ssh findmnt            | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| start          | -p functional-680000                     | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-680000 --dry-run           | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-680000                     | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                       | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:41 PDT |
	|                | -p functional-680000                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	| ssh            | functional-680000 ssh findmnt            | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-680000 ssh findmnt            | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-680000 ssh findmnt            | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| ssh            | functional-680000 ssh findmnt            | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-680000 ssh findmnt            | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-680000 ssh findmnt            | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| update-context | functional-680000                        | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-680000                        | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-680000                        | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| image          | functional-680000                        | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:41 PDT |
	|                | image ls --format short                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-680000                        | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | image ls --format json                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-680000                        | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | image ls --format yaml                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-680000                        | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | image ls --format table                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| ssh            | functional-680000 ssh pgrep              | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT |                     |
	|                | buildkitd                                |                   |         |         |                     |                     |
	| image          | functional-680000 image build -t         | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | localhost/my-image:functional-680000     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                   |         |         |                     |                     |
	| image          | functional-680000 image ls               | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	| delete         | -p functional-680000                     | functional-680000 | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	| start          | -p image-330000 --driver=qemu2           | image-330000      | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                |                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-330000      | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | ./testdata/image-build/test-normal       |                   |         |         |                     |                     |
	|                | -p image-330000                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-330000      | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                   |         |         |                     |                     |
	|                | image-330000                             |                   |         |         |                     |                     |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 03:41:05
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 03:41:05.111647    2290 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:41:05.112359    2290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:41:05.112361    2290 out.go:309] Setting ErrFile to fd 2...
	I1002 03:41:05.112363    2290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:41:05.112493    2290 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:41:05.113751    2290 out.go:303] Setting JSON to false
	I1002 03:41:05.134086    2290 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":639,"bootTime":1696242626,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:41:05.134167    2290 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:41:05.137418    2290 out.go:177] * [image-330000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:41:05.145362    2290 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:41:05.145366    2290 notify.go:220] Checking for updates...
	I1002 03:41:05.153273    2290 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:41:05.156355    2290 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:41:05.160216    2290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:41:05.163279    2290 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:41:05.166315    2290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:41:05.169318    2290 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:41:05.173265    2290 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:41:05.180280    2290 start.go:298] selected driver: qemu2
	I1002 03:41:05.180284    2290 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:41:05.180289    2290 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:41:05.180357    2290 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:41:05.183251    2290 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:41:05.189040    2290 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1002 03:41:05.189123    2290 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 03:41:05.189139    2290 cni.go:84] Creating CNI manager for ""
	I1002 03:41:05.189146    2290 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:41:05.189150    2290 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:41:05.189157    2290 start_flags.go:321] config:
	{Name:image-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:image-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:41:05.193971    2290 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:41:05.201267    2290 out.go:177] * Starting control plane node image-330000 in cluster image-330000
	I1002 03:41:05.205244    2290 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:41:05.205259    2290 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:41:05.205269    2290 cache.go:57] Caching tarball of preloaded images
	I1002 03:41:05.205328    2290 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:41:05.205332    2290 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:41:05.205565    2290 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/config.json ...
	I1002 03:41:05.205575    2290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/config.json: {Name:mk9208f705e4f4b41958aa32852f6839a7d2fb9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:41:05.205815    2290 start.go:365] acquiring machines lock for image-330000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:41:05.205842    2290 start.go:369] acquired machines lock for "image-330000" in 23.708µs
	I1002 03:41:05.205850    2290 start.go:93] Provisioning new machine with config: &{Name:image-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:image-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:41:05.205875    2290 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:41:05.213297    2290 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1002 03:41:05.234140    2290 start.go:159] libmachine.API.Create for "image-330000" (driver="qemu2")
	I1002 03:41:05.234164    2290 client.go:168] LocalClient.Create starting
	I1002 03:41:05.234229    2290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:41:05.234251    2290 main.go:141] libmachine: Decoding PEM data...
	I1002 03:41:05.234262    2290 main.go:141] libmachine: Parsing certificate...
	I1002 03:41:05.234296    2290 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:41:05.234312    2290 main.go:141] libmachine: Decoding PEM data...
	I1002 03:41:05.234318    2290 main.go:141] libmachine: Parsing certificate...
	I1002 03:41:05.234636    2290 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:41:05.342110    2290 main.go:141] libmachine: Creating SSH key...
	I1002 03:41:05.509676    2290 main.go:141] libmachine: Creating Disk image...
	I1002 03:41:05.509681    2290 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:41:05.509877    2290 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/disk.qcow2
	I1002 03:41:05.524095    2290 main.go:141] libmachine: STDOUT: 
	I1002 03:41:05.524115    2290 main.go:141] libmachine: STDERR: 
	I1002 03:41:05.524178    2290 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/disk.qcow2 +20000M
	I1002 03:41:05.531969    2290 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:41:05.531988    2290 main.go:141] libmachine: STDERR: 
	I1002 03:41:05.532010    2290 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/disk.qcow2
	I1002 03:41:05.532014    2290 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:41:05.532050    2290 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:4b:df:74:ef:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/disk.qcow2
	I1002 03:41:05.571230    2290 main.go:141] libmachine: STDOUT: 
	I1002 03:41:05.571252    2290 main.go:141] libmachine: STDERR: 
	I1002 03:41:05.571255    2290 main.go:141] libmachine: Attempt 0
	I1002 03:41:05.571269    2290 main.go:141] libmachine: Searching for 8e:4b:df:74:ef:35 in /var/db/dhcpd_leases ...
	I1002 03:41:05.571323    2290 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1002 03:41:05.571341    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ea:a7:e:53:bb:c6 ID:1,ea:a7:e:53:bb:c6 Lease:0x651bef0d}
	I1002 03:41:05.571348    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:32:59:b5:69:12:e9 ID:1,32:59:b5:69:12:e9 Lease:0x651a9d80}
	I1002 03:41:05.571353    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:6:8b:7a:74:5f:e ID:1,6:8b:7a:74:5f:e Lease:0x651bee73}
	I1002 03:41:07.573464    2290 main.go:141] libmachine: Attempt 1
	I1002 03:41:07.573514    2290 main.go:141] libmachine: Searching for 8e:4b:df:74:ef:35 in /var/db/dhcpd_leases ...
	I1002 03:41:07.573898    2290 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1002 03:41:07.573944    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ea:a7:e:53:bb:c6 ID:1,ea:a7:e:53:bb:c6 Lease:0x651bef0d}
	I1002 03:41:07.573971    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:32:59:b5:69:12:e9 ID:1,32:59:b5:69:12:e9 Lease:0x651a9d80}
	I1002 03:41:07.574025    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:6:8b:7a:74:5f:e ID:1,6:8b:7a:74:5f:e Lease:0x651bee73}
	I1002 03:41:09.574228    2290 main.go:141] libmachine: Attempt 2
	I1002 03:41:09.574242    2290 main.go:141] libmachine: Searching for 8e:4b:df:74:ef:35 in /var/db/dhcpd_leases ...
	I1002 03:41:09.574355    2290 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1002 03:41:09.574364    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ea:a7:e:53:bb:c6 ID:1,ea:a7:e:53:bb:c6 Lease:0x651bef0d}
	I1002 03:41:09.574369    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:32:59:b5:69:12:e9 ID:1,32:59:b5:69:12:e9 Lease:0x651a9d80}
	I1002 03:41:09.574373    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:6:8b:7a:74:5f:e ID:1,6:8b:7a:74:5f:e Lease:0x651bee73}
	I1002 03:41:11.576459    2290 main.go:141] libmachine: Attempt 3
	I1002 03:41:11.576494    2290 main.go:141] libmachine: Searching for 8e:4b:df:74:ef:35 in /var/db/dhcpd_leases ...
	I1002 03:41:11.576544    2290 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1002 03:41:11.576551    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ea:a7:e:53:bb:c6 ID:1,ea:a7:e:53:bb:c6 Lease:0x651bef0d}
	I1002 03:41:11.576557    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:32:59:b5:69:12:e9 ID:1,32:59:b5:69:12:e9 Lease:0x651a9d80}
	I1002 03:41:11.576561    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:6:8b:7a:74:5f:e ID:1,6:8b:7a:74:5f:e Lease:0x651bee73}
	I1002 03:41:13.578559    2290 main.go:141] libmachine: Attempt 4
	I1002 03:41:13.578563    2290 main.go:141] libmachine: Searching for 8e:4b:df:74:ef:35 in /var/db/dhcpd_leases ...
	I1002 03:41:13.578597    2290 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1002 03:41:13.578602    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ea:a7:e:53:bb:c6 ID:1,ea:a7:e:53:bb:c6 Lease:0x651bef0d}
	I1002 03:41:13.578606    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:32:59:b5:69:12:e9 ID:1,32:59:b5:69:12:e9 Lease:0x651a9d80}
	I1002 03:41:13.578611    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:6:8b:7a:74:5f:e ID:1,6:8b:7a:74:5f:e Lease:0x651bee73}
	I1002 03:41:15.580644    2290 main.go:141] libmachine: Attempt 5
	I1002 03:41:15.580653    2290 main.go:141] libmachine: Searching for 8e:4b:df:74:ef:35 in /var/db/dhcpd_leases ...
	I1002 03:41:15.580758    2290 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1002 03:41:15.580767    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ea:a7:e:53:bb:c6 ID:1,ea:a7:e:53:bb:c6 Lease:0x651bef0d}
	I1002 03:41:15.580771    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:32:59:b5:69:12:e9 ID:1,32:59:b5:69:12:e9 Lease:0x651a9d80}
	I1002 03:41:15.580775    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:6:8b:7a:74:5f:e ID:1,6:8b:7a:74:5f:e Lease:0x651bee73}
	I1002 03:41:17.582822    2290 main.go:141] libmachine: Attempt 6
	I1002 03:41:17.582840    2290 main.go:141] libmachine: Searching for 8e:4b:df:74:ef:35 in /var/db/dhcpd_leases ...
	I1002 03:41:17.582958    2290 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1002 03:41:17.582970    2290 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:4b:df:74:ef:35 ID:1,8e:4b:df:74:ef:35 Lease:0x651befcc}
	I1002 03:41:17.582973    2290 main.go:141] libmachine: Found match: 8e:4b:df:74:ef:35
	I1002 03:41:17.582985    2290 main.go:141] libmachine: IP: 192.168.105.5
	I1002 03:41:17.582989    2290 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I1002 03:41:19.602338    2290 machine.go:88] provisioning docker machine ...
	I1002 03:41:19.602387    2290 buildroot.go:166] provisioning hostname "image-330000"
	I1002 03:41:19.602593    2290 main.go:141] libmachine: Using SSH client type: native
	I1002 03:41:19.603321    2290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10085c760] 0x10085eed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1002 03:41:19.603336    2290 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-330000 && echo "image-330000" | sudo tee /etc/hostname
	I1002 03:41:19.691801    2290 main.go:141] libmachine: SSH cmd err, output: <nil>: image-330000
	
	I1002 03:41:19.691905    2290 main.go:141] libmachine: Using SSH client type: native
	I1002 03:41:19.692405    2290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10085c760] 0x10085eed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1002 03:41:19.692418    2290 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-330000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-330000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-330000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 03:41:19.757250    2290 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 03:41:19.757267    2290 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17340-994/.minikube CaCertPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17340-994/.minikube}
	I1002 03:41:19.757282    2290 buildroot.go:174] setting up certificates
	I1002 03:41:19.757295    2290 provision.go:83] configureAuth start
	I1002 03:41:19.757299    2290 provision.go:138] copyHostCerts
	I1002 03:41:19.757388    2290 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-994/.minikube/cert.pem, removing ...
	I1002 03:41:19.757394    2290 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-994/.minikube/cert.pem
	I1002 03:41:19.757576    2290 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17340-994/.minikube/cert.pem (1123 bytes)
	I1002 03:41:19.757831    2290 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-994/.minikube/key.pem, removing ...
	I1002 03:41:19.757833    2290 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-994/.minikube/key.pem
	I1002 03:41:19.757904    2290 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17340-994/.minikube/key.pem (1679 bytes)
	I1002 03:41:19.758028    2290 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-994/.minikube/ca.pem, removing ...
	I1002 03:41:19.758030    2290 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-994/.minikube/ca.pem
	I1002 03:41:19.758084    2290 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17340-994/.minikube/ca.pem (1082 bytes)
	I1002 03:41:19.758183    2290 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca-key.pem org=jenkins.image-330000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-330000]
	I1002 03:41:19.833381    2290 provision.go:172] copyRemoteCerts
	I1002 03:41:19.833417    2290 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 03:41:19.833423    2290 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/id_rsa Username:docker}
	I1002 03:41:19.861689    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 03:41:19.868757    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1002 03:41:19.875578    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 03:41:19.883036    2290 provision.go:86] duration metric: configureAuth took 125.740958ms
	I1002 03:41:19.883041    2290 buildroot.go:189] setting minikube options for container-runtime
	I1002 03:41:19.883132    2290 config.go:182] Loaded profile config "image-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:41:19.883160    2290 main.go:141] libmachine: Using SSH client type: native
	I1002 03:41:19.883371    2290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10085c760] 0x10085eed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1002 03:41:19.883374    2290 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 03:41:19.937200    2290 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1002 03:41:19.937205    2290 buildroot.go:70] root file system type: tmpfs
	I1002 03:41:19.937257    2290 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 03:41:19.937302    2290 main.go:141] libmachine: Using SSH client type: native
	I1002 03:41:19.937533    2290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10085c760] 0x10085eed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1002 03:41:19.937564    2290 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 03:41:19.995238    2290 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 03:41:19.995277    2290 main.go:141] libmachine: Using SSH client type: native
	I1002 03:41:19.995504    2290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10085c760] 0x10085eed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1002 03:41:19.995511    2290 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 03:41:20.350985    2290 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1002 03:41:20.350993    2290 machine.go:91] provisioned docker machine in 748.656416ms
	I1002 03:41:20.350997    2290 client.go:171] LocalClient.Create took 15.11715275s
	I1002 03:41:20.351012    2290 start.go:167] duration metric: libmachine.API.Create for "image-330000" took 15.117201708s
	I1002 03:41:20.351016    2290 start.go:300] post-start starting for "image-330000" (driver="qemu2")
	I1002 03:41:20.351020    2290 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 03:41:20.351082    2290 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 03:41:20.351089    2290 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/id_rsa Username:docker}
	I1002 03:41:20.378539    2290 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 03:41:20.380093    2290 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 03:41:20.380098    2290 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-994/.minikube/addons for local assets ...
	I1002 03:41:20.380164    2290 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-994/.minikube/files for local assets ...
	I1002 03:41:20.380254    2290 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem -> 14092.pem in /etc/ssl/certs
	I1002 03:41:20.380351    2290 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 03:41:20.382907    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem --> /etc/ssl/certs/14092.pem (1708 bytes)
	I1002 03:41:20.389283    2290 start.go:303] post-start completed in 38.263917ms
	I1002 03:41:20.389659    2290 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/config.json ...
	I1002 03:41:20.389833    2290 start.go:128] duration metric: createHost completed in 15.184278917s
	I1002 03:41:20.389862    2290 main.go:141] libmachine: Using SSH client type: native
	I1002 03:41:20.390075    2290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10085c760] 0x10085eed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1002 03:41:20.390077    2290 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 03:41:20.441501    2290 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696243280.463806710
	
	I1002 03:41:20.441505    2290 fix.go:206] guest clock: 1696243280.463806710
	I1002 03:41:20.441508    2290 fix.go:219] Guest: 2023-10-02 03:41:20.46380671 -0700 PDT Remote: 2023-10-02 03:41:20.38984 -0700 PDT m=+15.302005376 (delta=73.96671ms)
	I1002 03:41:20.441517    2290 fix.go:190] guest clock delta is within tolerance: 73.96671ms
	I1002 03:41:20.441519    2290 start.go:83] releasing machines lock for "image-330000", held for 15.235998916s
	I1002 03:41:20.441769    2290 ssh_runner.go:195] Run: cat /version.json
	I1002 03:41:20.441773    2290 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 03:41:20.441777    2290 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/id_rsa Username:docker}
	I1002 03:41:20.441790    2290 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/id_rsa Username:docker}
	I1002 03:41:20.511129    2290 ssh_runner.go:195] Run: systemctl --version
	I1002 03:41:20.513325    2290 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 03:41:20.515181    2290 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 03:41:20.515207    2290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 03:41:20.520575    2290 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 03:41:20.520583    2290 start.go:469] detecting cgroup driver to use...
	I1002 03:41:20.520653    2290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
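The step above writes `/etc/crictl.yaml` so that `crictl` talks to a specific CRI endpoint. A minimal sketch of the same write, pointed at a scratch path instead of `/etc` so it needs no root:

```shell
# Sketch of the crictl.yaml write above; /tmp path is illustrative.
CRICTL_CONF=/tmp/crictl-demo.yaml
printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' > "$CRICTL_CONF"
```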
	I1002 03:41:20.526107    2290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 03:41:20.529021    2290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 03:41:20.531950    2290 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 03:41:20.531972    2290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 03:41:20.535118    2290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 03:41:20.538510    2290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 03:41:20.541696    2290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 03:41:20.544773    2290 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 03:41:20.547606    2290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
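The series of `sed` commands above rewrites `/etc/containerd/config.toml` in place; the capture group `( *)` preserves the line's indentation while flipping the value. A sketch of the `SystemdCgroup` edit against a throwaway copy of a config.toml fragment:

```shell
# Sketch of the containerd cgroup-driver edit above, on an illustrative copy.
CONF=/tmp/containerd-demo.toml
cat > "$CONF" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# same substitution the log runs: keep indentation, force cgroupfs
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CONF"
```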
	I1002 03:41:20.551050    2290 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 03:41:20.554240    2290 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 03:41:20.556994    2290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:41:20.629030    2290 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 03:41:20.636751    2290 start.go:469] detecting cgroup driver to use...
	I1002 03:41:20.636815    2290 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 03:41:20.642126    2290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 03:41:20.647186    2290 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 03:41:20.653429    2290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 03:41:20.658027    2290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 03:41:20.662967    2290 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 03:41:20.703154    2290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 03:41:20.708620    2290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 03:41:20.714027    2290 ssh_runner.go:195] Run: which cri-dockerd
	I1002 03:41:20.715339    2290 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 03:41:20.718493    2290 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 03:41:20.723744    2290 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 03:41:20.800610    2290 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 03:41:20.868293    2290 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 03:41:20.868345    2290 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 03:41:20.873454    2290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:41:20.952837    2290 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 03:41:22.116588    2290 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163764167s)
	I1002 03:41:22.116640    2290 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 03:41:22.192911    2290 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 03:41:22.267286    2290 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 03:41:22.354177    2290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:41:22.431527    2290 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 03:41:22.439258    2290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:41:22.516079    2290 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1002 03:41:22.539537    2290 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 03:41:22.539597    2290 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
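"Will wait 60s for socket path" above is a poll loop: `stat` the path until it appears or a deadline passes. A hedged sketch with an illustrative `wait_for_path` helper (not a minikube function) and a short timeout:

```shell
# Sketch of the wait-for-socket step above; helper name is illustrative.
wait_for_path() {
  path="$1" timeout="$2"
  i=0
  while [ "$i" -lt "$timeout" ]; do
    # same existence probe the log uses
    stat "$path" >/dev/null 2>&1 && return 0
    sleep 1
    i=$((i + 1))
  done
  return 1
}
```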
	I1002 03:41:22.542452    2290 start.go:537] Will wait 60s for crictl version
	I1002 03:41:22.542498    2290 ssh_runner.go:195] Run: which crictl
	I1002 03:41:22.543962    2290 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 03:41:22.563418    2290 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1002 03:41:22.563483    2290 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 03:41:22.573504    2290 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 03:41:22.590022    2290 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1002 03:41:22.590146    2290 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1002 03:41:22.591462    2290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 03:41:22.595178    2290 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:41:22.595214    2290 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 03:41:22.600469    2290 docker.go:664] Got preloaded images: 
	I1002 03:41:22.600474    2290 docker.go:670] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I1002 03:41:22.600512    2290 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1002 03:41:22.603425    2290 ssh_runner.go:195] Run: which lz4
	I1002 03:41:22.604760    2290 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 03:41:22.605990    2290 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 03:41:22.605999    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I1002 03:41:23.901703    2290 docker.go:628] Took 1.296986 seconds to copy over tarball
	I1002 03:41:23.901757    2290 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 03:41:24.922760    2290 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.021014s)
	I1002 03:41:24.922769    2290 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 03:41:24.939580    2290 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1002 03:41:24.943016    2290 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1002 03:41:24.948275    2290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:41:25.029511    2290 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 03:41:26.499384    2290 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.469893125s)
	I1002 03:41:26.499465    2290 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 03:41:26.505323    2290 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 03:41:26.505330    2290 cache_images.go:84] Images are preloaded, skipping loading
	I1002 03:41:26.505378    2290 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 03:41:26.512886    2290 cni.go:84] Creating CNI manager for ""
	I1002 03:41:26.512893    2290 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:41:26.512902    2290 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 03:41:26.512910    2290 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-330000 NodeName:image-330000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 03:41:26.512990    2290 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-330000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 03:41:26.513025    2290 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-330000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:image-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 03:41:26.513099    2290 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 03:41:26.516043    2290 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 03:41:26.516067    2290 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 03:41:26.518967    2290 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1002 03:41:26.524435    2290 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 03:41:26.529277    2290 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I1002 03:41:26.534571    2290 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I1002 03:41:26.535929    2290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
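The `/etc/hosts` update above is made idempotent by stripping any existing line for the hostname before appending the fresh entry. A sketch against a scratch copy so the real `/etc/hosts` is untouched (`add_host` is an illustrative helper):

```shell
# Sketch of the idempotent hosts-entry update above; paths illustrative.
HOSTS=/tmp/hosts-demo
printf '127.0.0.1\tlocalhost\n' > "$HOSTS"
TAB="$(printf '\t')"
add_host() {
  # drop any existing line for this name, then append the fresh entry
  { grep -v "${TAB}$2\$" "$HOSTS" || true; printf '%s\t%s\n' "$1" "$2"; } > "$HOSTS.new"
  mv "$HOSTS.new" "$HOSTS"
}
add_host 192.168.105.5 control-plane.minikube.internal
add_host 192.168.105.5 control-plane.minikube.internal  # re-running does not duplicate
```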
	I1002 03:41:26.539473    2290 certs.go:56] Setting up /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000 for IP: 192.168.105.5
	I1002 03:41:26.539480    2290 certs.go:190] acquiring lock for shared ca certs: {Name:mkb95ac88d0fec37f1e658f6bb500deee9ee7493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:41:26.539608    2290 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17340-994/.minikube/ca.key
	I1002 03:41:26.539641    2290 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17340-994/.minikube/proxy-client-ca.key
	I1002 03:41:26.539666    2290 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/client.key
	I1002 03:41:26.539673    2290 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/client.crt with IP's: []
	I1002 03:41:26.628653    2290 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/client.crt ...
	I1002 03:41:26.628656    2290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/client.crt: {Name:mkabcc7c02ada329a30e81d79090c3ddaa23db8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:41:26.628891    2290 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/client.key ...
	I1002 03:41:26.628893    2290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/client.key: {Name:mk923c851f30202075bb4a3c822a66452ad1635b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:41:26.629008    2290 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/apiserver.key.e69b33ca
	I1002 03:41:26.629014    2290 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 03:41:26.758323    2290 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/apiserver.crt.e69b33ca ...
	I1002 03:41:26.758327    2290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/apiserver.crt.e69b33ca: {Name:mkc07b9f0468ca5a46d9a36de80719ff3e713cfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:41:26.758515    2290 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/apiserver.key.e69b33ca ...
	I1002 03:41:26.758517    2290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/apiserver.key.e69b33ca: {Name:mk080538876e151fd84ec37efd70631dfc478ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:41:26.758619    2290 certs.go:337] copying /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/apiserver.crt
	I1002 03:41:26.758876    2290 certs.go:341] copying /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/apiserver.key
	I1002 03:41:26.759009    2290 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/proxy-client.key
	I1002 03:41:26.759014    2290 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/proxy-client.crt with IP's: []
	I1002 03:41:26.875937    2290 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/proxy-client.crt ...
	I1002 03:41:26.875939    2290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/proxy-client.crt: {Name:mka4df725f29ab4b10b708b2a289486d2dab4d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:41:26.876073    2290 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/proxy-client.key ...
	I1002 03:41:26.876075    2290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/proxy-client.key: {Name:mkc5f599745b4cb42dba2bd1fb99ecb9cda17fc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:41:26.876295    2290 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/1409.pem (1338 bytes)
	W1002 03:41:26.876317    2290 certs.go:433] ignoring /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/1409_empty.pem, impossibly tiny 0 bytes
	I1002 03:41:26.876322    2290 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 03:41:26.876339    2290 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem (1082 bytes)
	I1002 03:41:26.876354    2290 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem (1123 bytes)
	I1002 03:41:26.876372    2290 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/key.pem (1679 bytes)
	I1002 03:41:26.876406    2290 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem (1708 bytes)
	I1002 03:41:26.876717    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 03:41:26.884432    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 03:41:26.891191    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 03:41:26.898650    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/image-330000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 03:41:26.906069    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 03:41:26.913518    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 03:41:26.920519    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 03:41:26.927165    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 03:41:26.934469    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem --> /usr/share/ca-certificates/14092.pem (1708 bytes)
	I1002 03:41:26.941917    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 03:41:26.948885    2290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/certs/1409.pem --> /usr/share/ca-certificates/1409.pem (1338 bytes)
	I1002 03:41:26.955459    2290 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 03:41:26.960621    2290 ssh_runner.go:195] Run: openssl version
	I1002 03:41:26.962541    2290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 03:41:26.966142    2290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 03:41:26.967859    2290 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1002 03:41:26.967874    2290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 03:41:26.969729    2290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 03:41:26.972700    2290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1409.pem && ln -fs /usr/share/ca-certificates/1409.pem /etc/ssl/certs/1409.pem"
	I1002 03:41:26.975619    2290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1409.pem
	I1002 03:41:26.977191    2290 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:37 /usr/share/ca-certificates/1409.pem
	I1002 03:41:26.977205    2290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1409.pem
	I1002 03:41:26.979021    2290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1409.pem /etc/ssl/certs/51391683.0"
	I1002 03:41:26.982550    2290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14092.pem && ln -fs /usr/share/ca-certificates/14092.pem /etc/ssl/certs/14092.pem"
	I1002 03:41:26.986072    2290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14092.pem
	I1002 03:41:26.987603    2290 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:37 /usr/share/ca-certificates/14092.pem
	I1002 03:41:26.987620    2290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14092.pem
	I1002 03:41:26.989586    2290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14092.pem /etc/ssl/certs/3ec20f2e.0"
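The `openssl x509 -hash` / `ln -fs` pairs above install CA certificates under `/etc/ssl/certs` using OpenSSL's subject-hash naming convention (`<hash>.0`), which is how the TLS stack locates trust anchors. A sketch with a throwaway self-signed certificate standing in for `minikubeCA.pem` (all paths illustrative; assumes the `openssl` CLI is available):

```shell
# Sketch of the subject-hash symlink step above, using a demo CA.
CERTDIR=/tmp/certs-demo
mkdir -p "$CERTDIR"
# generate a throwaway self-signed cert to hash
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=demoCA' \
  -keyout "$CERTDIR/demo.key" -out "$CERTDIR/demo.pem" 2>/dev/null
# compute the subject hash and create the <hash>.0 lookup symlink
h=$(openssl x509 -hash -noout -in "$CERTDIR/demo.pem")
ln -fs "$CERTDIR/demo.pem" "$CERTDIR/$h.0"
```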
	I1002 03:41:26.992553    2290 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 03:41:26.993837    2290 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 03:41:26.993867    2290 kubeadm.go:404] StartCluster: {Name:image-330000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.2 ClusterName:image-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:41:26.993934    2290 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 03:41:26.999465    2290 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 03:41:27.002880    2290 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 03:41:27.006210    2290 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 03:41:27.009346    2290 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 03:41:27.009358    2290 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 03:41:27.032487    2290 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 03:41:27.032518    2290 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 03:41:27.089717    2290 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 03:41:27.089765    2290 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 03:41:27.089816    2290 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 03:41:27.185510    2290 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 03:41:27.194692    2290 out.go:204]   - Generating certificates and keys ...
	I1002 03:41:27.194733    2290 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 03:41:27.194765    2290 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 03:41:27.268847    2290 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 03:41:27.378445    2290 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 03:41:27.745649    2290 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 03:41:27.841519    2290 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 03:41:27.983307    2290 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 03:41:27.983370    2290 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-330000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I1002 03:41:28.024660    2290 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 03:41:28.024720    2290 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-330000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I1002 03:41:28.085048    2290 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 03:41:28.350318    2290 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 03:41:28.467389    2290 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 03:41:28.467423    2290 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 03:41:28.552958    2290 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 03:41:28.581938    2290 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 03:41:28.680334    2290 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 03:41:28.787877    2290 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 03:41:28.788121    2290 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 03:41:28.789150    2290 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 03:41:28.799357    2290 out.go:204]   - Booting up control plane ...
	I1002 03:41:28.799444    2290 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 03:41:28.799487    2290 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 03:41:28.799515    2290 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 03:41:28.799564    2290 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 03:41:28.799602    2290 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 03:41:28.799618    2290 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 03:41:28.881838    2290 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 03:41:32.884130    2290 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.002394 seconds
	I1002 03:41:32.884214    2290 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 03:41:32.890330    2290 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 03:41:33.398789    2290 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 03:41:33.398888    2290 kubeadm.go:322] [mark-control-plane] Marking the node image-330000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 03:41:33.903818    2290 kubeadm.go:322] [bootstrap-token] Using token: ln1pf9.0fxt3d75hh7kbaj5
	I1002 03:41:33.910123    2290 out.go:204]   - Configuring RBAC rules ...
	I1002 03:41:33.910189    2290 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 03:41:33.911799    2290 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 03:41:33.916210    2290 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 03:41:33.917424    2290 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 03:41:33.918513    2290 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 03:41:33.920171    2290 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 03:41:33.924010    2290 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 03:41:34.099857    2290 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 03:41:34.314350    2290 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 03:41:34.314657    2290 kubeadm.go:322] 
	I1002 03:41:34.314685    2290 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 03:41:34.314687    2290 kubeadm.go:322] 
	I1002 03:41:34.314734    2290 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 03:41:34.314737    2290 kubeadm.go:322] 
	I1002 03:41:34.314758    2290 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 03:41:34.314789    2290 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 03:41:34.314811    2290 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 03:41:34.314812    2290 kubeadm.go:322] 
	I1002 03:41:34.314838    2290 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 03:41:34.314840    2290 kubeadm.go:322] 
	I1002 03:41:34.314861    2290 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 03:41:34.314863    2290 kubeadm.go:322] 
	I1002 03:41:34.314886    2290 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 03:41:34.314926    2290 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 03:41:34.314967    2290 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 03:41:34.314969    2290 kubeadm.go:322] 
	I1002 03:41:34.315007    2290 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 03:41:34.315054    2290 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 03:41:34.315056    2290 kubeadm.go:322] 
	I1002 03:41:34.315098    2290 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ln1pf9.0fxt3d75hh7kbaj5 \
	I1002 03:41:34.315155    2290 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8318d3e3f19b90e0283160a1353f59cf85f53baf0a5ecb509b7354435554388c \
	I1002 03:41:34.315168    2290 kubeadm.go:322] 	--control-plane 
	I1002 03:41:34.315170    2290 kubeadm.go:322] 
	I1002 03:41:34.315213    2290 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 03:41:34.315215    2290 kubeadm.go:322] 
	I1002 03:41:34.315254    2290 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ln1pf9.0fxt3d75hh7kbaj5 \
	I1002 03:41:34.315305    2290 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8318d3e3f19b90e0283160a1353f59cf85f53baf0a5ecb509b7354435554388c 
	I1002 03:41:34.315358    2290 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 03:41:34.315365    2290 cni.go:84] Creating CNI manager for ""
	I1002 03:41:34.315373    2290 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:41:34.321760    2290 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 03:41:34.325929    2290 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 03:41:34.328967    2290 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 03:41:34.333621    2290 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 03:41:34.333656    2290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:41:34.333677    2290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=image-330000 minikube.k8s.io/updated_at=2023_10_02T03_41_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:41:34.394712    2290 kubeadm.go:1081] duration metric: took 61.086625ms to wait for elevateKubeSystemPrivileges.
	I1002 03:41:34.394736    2290 ops.go:34] apiserver oom_adj: -16
	I1002 03:41:34.394740    2290 kubeadm.go:406] StartCluster complete in 7.401033125s
	I1002 03:41:34.394749    2290 settings.go:142] acquiring lock: {Name:mk3f5122457e6ee64cf5dd538efdbb968ff53214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:41:34.394821    2290 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:41:34.395182    2290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/kubeconfig: {Name:mkba984fcf92a3f610125e890c28c2ff94eec9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:41:34.395407    2290 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 03:41:34.395464    2290 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 03:41:34.395498    2290 addons.go:69] Setting storage-provisioner=true in profile "image-330000"
	I1002 03:41:34.395504    2290 addons.go:231] Setting addon storage-provisioner=true in "image-330000"
	I1002 03:41:34.395515    2290 addons.go:69] Setting default-storageclass=true in profile "image-330000"
	I1002 03:41:34.395522    2290 host.go:66] Checking if "image-330000" exists ...
	I1002 03:41:34.395524    2290 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-330000"
	I1002 03:41:34.395545    2290 config.go:182] Loaded profile config "image-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:41:34.396411    2290 addons.go:231] Setting addon default-storageclass=true in "image-330000"
	I1002 03:41:34.396418    2290 host.go:66] Checking if "image-330000" exists ...
	I1002 03:41:34.400783    2290 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 03:41:34.397000    2290 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 03:41:34.404763    2290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 03:41:34.404775    2290 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/id_rsa Username:docker}
	I1002 03:41:34.404833    2290 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 03:41:34.404836    2290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 03:41:34.404839    2290 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/image-330000/id_rsa Username:docker}
	I1002 03:41:34.406768    2290 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-330000" context rescaled to 1 replicas
	I1002 03:41:34.406780    2290 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:41:34.414794    2290 out.go:177] * Verifying Kubernetes components...
	I1002 03:41:34.418816    2290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 03:41:34.436173    2290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 03:41:34.445510    2290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 03:41:34.447421    2290 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 03:41:34.447812    2290 api_server.go:52] waiting for apiserver process to appear ...
	I1002 03:41:34.447842    2290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 03:41:34.949977    2290 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I1002 03:41:34.949990    2290 api_server.go:72] duration metric: took 543.213458ms to wait for apiserver process to appear ...
	I1002 03:41:34.949993    2290 api_server.go:88] waiting for apiserver healthz status ...
	I1002 03:41:34.949999    2290 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I1002 03:41:34.960479    2290 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1002 03:41:34.952898    2290 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I1002 03:41:34.966631    2290 addons.go:502] enable addons completed in 571.201583ms: enabled=[storage-provisioner default-storageclass]
	I1002 03:41:34.961207    2290 api_server.go:141] control plane version: v1.28.2
	I1002 03:41:34.966641    2290 api_server.go:131] duration metric: took 16.646209ms to wait for apiserver health ...
	I1002 03:41:34.966644    2290 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 03:41:34.969250    2290 system_pods.go:59] 5 kube-system pods found
	I1002 03:41:34.969256    2290 system_pods.go:61] "etcd-image-330000" [8e723dda-8567-42a0-b4b2-f75909ded3ec] Pending
	I1002 03:41:34.969258    2290 system_pods.go:61] "kube-apiserver-image-330000" [cb42be6e-f3d5-4ded-9dfe-06c83e0a6987] Pending
	I1002 03:41:34.969260    2290 system_pods.go:61] "kube-controller-manager-image-330000" [da59f877-08c1-4a9d-a201-74997d6ff70a] Pending
	I1002 03:41:34.969261    2290 system_pods.go:61] "kube-scheduler-image-330000" [1a325473-2dd6-4547-b543-28bbb147fa96] Pending
	I1002 03:41:34.969265    2290 system_pods.go:61] "storage-provisioner" [86e7bfaa-072d-4bc4-a7c0-3dcb04f662e3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1002 03:41:34.969267    2290 system_pods.go:74] duration metric: took 2.620875ms to wait for pod list to return data ...
	I1002 03:41:34.969270    2290 kubeadm.go:581] duration metric: took 562.495ms to wait for : map[apiserver:true system_pods:true] ...
	I1002 03:41:34.969275    2290 node_conditions.go:102] verifying NodePressure condition ...
	I1002 03:41:34.970437    2290 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1002 03:41:34.970443    2290 node_conditions.go:123] node cpu capacity is 2
	I1002 03:41:34.970447    2290 node_conditions.go:105] duration metric: took 1.170583ms to run NodePressure ...
	I1002 03:41:34.970451    2290 start.go:228] waiting for startup goroutines ...
	I1002 03:41:34.970453    2290 start.go:233] waiting for cluster config update ...
	I1002 03:41:34.970457    2290 start.go:242] writing updated cluster config ...
	I1002 03:41:34.970711    2290 ssh_runner.go:195] Run: rm -f paused
	I1002 03:41:34.998321    2290 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I1002 03:41:35.002654    2290 out.go:177] * Done! kubectl is now configured to use "image-330000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-10-02 10:41:16 UTC, ends at Mon 2023-10-02 10:41:36 UTC. --
	Oct 02 10:41:29 image-330000 dockerd[1107]: time="2023-10-02T10:41:29.754944590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 10:41:29 image-330000 dockerd[1107]: time="2023-10-02T10:41:29.755388840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:41:29 image-330000 dockerd[1107]: time="2023-10-02T10:41:29.755406256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 10:41:29 image-330000 dockerd[1107]: time="2023-10-02T10:41:29.755415506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:41:29 image-330000 cri-dockerd[995]: time="2023-10-02T10:41:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/19d564b1027527a509ed111d4f712edbc3fb774aed4316e8a67639b13a4f86f0/resolv.conf as [nameserver 192.168.105.1]"
	Oct 02 10:41:29 image-330000 cri-dockerd[995]: time="2023-10-02T10:41:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/076215bd2a1f8090de9a63be8ff1d645710deafcf25dc544004a5bde3de0660a/resolv.conf as [nameserver 192.168.105.1]"
	Oct 02 10:41:29 image-330000 dockerd[1107]: time="2023-10-02T10:41:29.880743465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 10:41:29 image-330000 dockerd[1107]: time="2023-10-02T10:41:29.880890048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:41:29 image-330000 dockerd[1107]: time="2023-10-02T10:41:29.880916423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 10:41:29 image-330000 dockerd[1107]: time="2023-10-02T10:41:29.880938090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:41:29 image-330000 dockerd[1107]: time="2023-10-02T10:41:29.893558423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 10:41:29 image-330000 dockerd[1107]: time="2023-10-02T10:41:29.893698590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:41:29 image-330000 dockerd[1107]: time="2023-10-02T10:41:29.893723465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 10:41:29 image-330000 dockerd[1107]: time="2023-10-02T10:41:29.893744465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:41:36 image-330000 dockerd[1101]: time="2023-10-02T10:41:36.055026634Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Oct 02 10:41:36 image-330000 dockerd[1101]: time="2023-10-02T10:41:36.168240676Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Oct 02 10:41:36 image-330000 dockerd[1101]: time="2023-10-02T10:41:36.181111009Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Oct 02 10:41:36 image-330000 dockerd[1107]: time="2023-10-02T10:41:36.213286051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 10:41:36 image-330000 dockerd[1107]: time="2023-10-02T10:41:36.213319843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:41:36 image-330000 dockerd[1107]: time="2023-10-02T10:41:36.213328593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 10:41:36 image-330000 dockerd[1107]: time="2023-10-02T10:41:36.213335093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:41:36 image-330000 dockerd[1101]: time="2023-10-02T10:41:36.346517884Z" level=info msg="ignoring event" container=31a20bdfe9ff82c12731eb7adbee6959c75c2b5bb069ea8754a128c6b8359283 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:41:36 image-330000 dockerd[1107]: time="2023-10-02T10:41:36.346637551Z" level=info msg="shim disconnected" id=31a20bdfe9ff82c12731eb7adbee6959c75c2b5bb069ea8754a128c6b8359283 namespace=moby
	Oct 02 10:41:36 image-330000 dockerd[1107]: time="2023-10-02T10:41:36.346677968Z" level=warning msg="cleaning up after shim disconnected" id=31a20bdfe9ff82c12731eb7adbee6959c75c2b5bb069ea8754a128c6b8359283 namespace=moby
	Oct 02 10:41:36 image-330000 dockerd[1107]: time="2023-10-02T10:41:36.346683176Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	33cd619adfe1e       64fc40cee3716       7 seconds ago       Running             kube-scheduler            0                   076215bd2a1f8       kube-scheduler-image-330000
	9c2871fb2742f       9cdd6470f48c8       7 seconds ago       Running             etcd                      0                   19d564b102752       etcd-image-330000
	d8cf8dcbebf75       89d57b83c1786       7 seconds ago       Running             kube-controller-manager   0                   5070e5be569bf       kube-controller-manager-image-330000
	5ddd8ac808348       30bb499447fe1       7 seconds ago       Running             kube-apiserver            0                   ce0354dc79885       kube-apiserver-image-330000
	
	* 
	* ==> describe nodes <==
	* Name:               image-330000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-330000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=image-330000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T03_41_34_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 10:41:31 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-330000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 10:41:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 10:41:34 +0000   Mon, 02 Oct 2023 10:41:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 10:41:34 +0000   Mon, 02 Oct 2023 10:41:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 10:41:34 +0000   Mon, 02 Oct 2023 10:41:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 02 Oct 2023 10:41:34 +0000   Mon, 02 Oct 2023 10:41:30 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-330000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 900fd4ce955e4ffbb8a47f929bc0dfae
	  System UUID:                900fd4ce955e4ffbb8a47f929bc0dfae
	  Boot ID:                    d3c63ad1-661e-4e58-bbb7-b9bc7a8b0d76
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-330000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2s
	  kube-system                 kube-apiserver-image-330000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-controller-manager-image-330000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-image-330000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 2s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  2s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2s    kubelet  Node image-330000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s    kubelet  Node image-330000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s    kubelet  Node image-330000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Oct 2 10:41] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.662102] EINJ: EINJ table not found.
	[  +0.524799] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.042894] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000871] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.075604] systemd-fstab-generator[477]: Ignoring "noauto" for root device
	[  +0.079304] systemd-fstab-generator[488]: Ignoring "noauto" for root device
	[  +0.422428] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.171140] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +0.067427] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[  +0.085567] systemd-fstab-generator[726]: Ignoring "noauto" for root device
	[  +1.239566] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +0.073967] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +0.084241] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[  +0.080066] systemd-fstab-generator[946]: Ignoring "noauto" for root device
	[  +0.083810] systemd-fstab-generator[988]: Ignoring "noauto" for root device
	[  +2.513309] systemd-fstab-generator[1094]: Ignoring "noauto" for root device
	[  +1.450486] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.394767] systemd-fstab-generator[1476]: Ignoring "noauto" for root device
	[  +5.128813] systemd-fstab-generator[2384]: Ignoring "noauto" for root device
	[  +2.210170] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [9c2871fb2742] <==
	* {"level":"info","ts":"2023-10-02T10:41:30.100979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-10-02T10:41:30.101012Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-10-02T10:41:30.10177Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-02T10:41:30.101825Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-10-02T10:41:30.101828Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-10-02T10:41:30.104991Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-02T10:41:30.105002Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T10:41:30.550749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-02T10:41:30.550799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-02T10:41:30.550825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-10-02T10:41:30.550836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-10-02T10:41:30.550847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-10-02T10:41:30.550856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-10-02T10:41:30.550865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-10-02T10:41:30.554846Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-330000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T10:41:30.554872Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T10:41:30.555331Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-10-02T10:41:30.555388Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:41:30.555525Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T10:41:30.555895Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T10:41:30.566986Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T10:41:30.56702Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-02T10:41:30.567109Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:41:30.56716Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T10:41:30.567184Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  10:41:36 up 0 min,  0 users,  load average: 1.06, 0.23, 0.08
	Linux image-330000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5ddd8ac80834] <==
	* I1002 10:41:31.415132       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 10:41:31.415167       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 10:41:31.415200       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 10:41:31.415610       1 controller.go:624] quota admission added evaluator for: namespaces
	I1002 10:41:31.415902       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1002 10:41:31.415916       1 aggregator.go:166] initial CRD sync complete...
	I1002 10:41:31.415919       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 10:41:31.415949       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 10:41:31.415955       1 cache.go:39] Caches are synced for autoregister controller
	I1002 10:41:31.418519       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 10:41:31.433956       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 10:41:31.442678       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1002 10:41:32.317211       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 10:41:32.318531       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 10:41:32.318537       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 10:41:32.445359       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 10:41:32.456232       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 10:41:32.512190       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 10:41:32.515030       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I1002 10:41:32.515347       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 10:41:32.516568       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 10:41:33.366685       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 10:41:34.117798       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 10:41:34.121592       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 10:41:34.125263       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [d8cf8dcbebf7] <==
	* I1002 10:41:30.868633       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 10:41:33.363266       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I1002 10:41:33.368311       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I1002 10:41:33.368403       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I1002 10:41:33.368411       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I1002 10:41:33.371266       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1002 10:41:33.371350       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I1002 10:41:33.371361       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I1002 10:41:33.380660       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I1002 10:41:33.380701       1 namespace_controller.go:197] "Starting namespace controller"
	I1002 10:41:33.380708       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I1002 10:41:33.384045       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I1002 10:41:33.384131       1 disruption.go:437] "Sending events to api server."
	I1002 10:41:33.384150       1 disruption.go:448] "Starting disruption controller"
	I1002 10:41:33.384152       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I1002 10:41:33.386485       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I1002 10:41:33.386609       1 replica_set.go:214] "Starting controller" name="replicaset"
	I1002 10:41:33.386665       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I1002 10:41:33.388998       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I1002 10:41:33.389049       1 stateful_set.go:161] "Starting stateful set controller"
	I1002 10:41:33.389056       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I1002 10:41:33.391221       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1002 10:41:33.391275       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I1002 10:41:33.391284       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I1002 10:41:33.463432       1 shared_informer.go:318] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [33cd619adfe1] <==
	* W1002 10:41:31.384972       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 10:41:31.385251       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1002 10:41:31.384987       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 10:41:31.385302       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 10:41:31.384999       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 10:41:31.385334       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 10:41:31.385454       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 10:41:31.385463       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 10:41:31.385044       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 10:41:31.385523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 10:41:31.385063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 10:41:31.385568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 10:41:31.385080       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 10:41:31.385627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 10:41:31.385102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 10:41:31.385668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1002 10:41:32.209715       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 10:41:32.209735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 10:41:32.288369       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 10:41:32.288377       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 10:41:32.356088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 10:41:32.356114       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1002 10:41:32.373017       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 10:41:32.373103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1002 10:41:34.078094       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 10:41:16 UTC, ends at Mon 2023-10-02 10:41:36 UTC. --
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.269187    2403 kubelet_node_status.go:108] "Node was previously registered" node="image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.269224    2403 kubelet_node_status.go:73] "Successfully registered node" node="image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.289596    2403 topology_manager.go:215] "Topology Admit Handler" podUID="89f0a2b7a1c6fd2e85ad51ceb188212f" podNamespace="kube-system" podName="etcd-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.289656    2403 topology_manager.go:215] "Topology Admit Handler" podUID="14d55717b415001f93d0a20b623067b1" podNamespace="kube-system" podName="kube-apiserver-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.289677    2403 topology_manager.go:215] "Topology Admit Handler" podUID="f6c50c0091b9fa9d4d0de9baf573950e" podNamespace="kube-system" podName="kube-controller-manager-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.289691    2403 topology_manager.go:215] "Topology Admit Handler" podUID="adeecdba08aa84cea9d4f5f478bf9c0a" podNamespace="kube-system" podName="kube-scheduler-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: E1002 10:41:34.293952    2403 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-image-330000\" already exists" pod="kube-system/kube-scheduler-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.363275    2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6c50c0091b9fa9d4d0de9baf573950e-ca-certs\") pod \"kube-controller-manager-image-330000\" (UID: \"f6c50c0091b9fa9d4d0de9baf573950e\") " pod="kube-system/kube-controller-manager-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.363293    2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6c50c0091b9fa9d4d0de9baf573950e-k8s-certs\") pod \"kube-controller-manager-image-330000\" (UID: \"f6c50c0091b9fa9d4d0de9baf573950e\") " pod="kube-system/kube-controller-manager-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.363305    2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6c50c0091b9fa9d4d0de9baf573950e-kubeconfig\") pod \"kube-controller-manager-image-330000\" (UID: \"f6c50c0091b9fa9d4d0de9baf573950e\") " pod="kube-system/kube-controller-manager-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.363315    2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6c50c0091b9fa9d4d0de9baf573950e-usr-share-ca-certificates\") pod \"kube-controller-manager-image-330000\" (UID: \"f6c50c0091b9fa9d4d0de9baf573950e\") " pod="kube-system/kube-controller-manager-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.363361    2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/89f0a2b7a1c6fd2e85ad51ceb188212f-etcd-certs\") pod \"etcd-image-330000\" (UID: \"89f0a2b7a1c6fd2e85ad51ceb188212f\") " pod="kube-system/etcd-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.363376    2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/89f0a2b7a1c6fd2e85ad51ceb188212f-etcd-data\") pod \"etcd-image-330000\" (UID: \"89f0a2b7a1c6fd2e85ad51ceb188212f\") " pod="kube-system/etcd-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.363385    2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14d55717b415001f93d0a20b623067b1-ca-certs\") pod \"kube-apiserver-image-330000\" (UID: \"14d55717b415001f93d0a20b623067b1\") " pod="kube-system/kube-apiserver-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.363413    2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14d55717b415001f93d0a20b623067b1-k8s-certs\") pod \"kube-apiserver-image-330000\" (UID: \"14d55717b415001f93d0a20b623067b1\") " pod="kube-system/kube-apiserver-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.363423    2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14d55717b415001f93d0a20b623067b1-usr-share-ca-certificates\") pod \"kube-apiserver-image-330000\" (UID: \"14d55717b415001f93d0a20b623067b1\") " pod="kube-system/kube-apiserver-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.363432    2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f6c50c0091b9fa9d4d0de9baf573950e-flexvolume-dir\") pod \"kube-controller-manager-image-330000\" (UID: \"f6c50c0091b9fa9d4d0de9baf573950e\") " pod="kube-system/kube-controller-manager-image-330000"
	Oct 02 10:41:34 image-330000 kubelet[2403]: I1002 10:41:34.363442    2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/adeecdba08aa84cea9d4f5f478bf9c0a-kubeconfig\") pod \"kube-scheduler-image-330000\" (UID: \"adeecdba08aa84cea9d4f5f478bf9c0a\") " pod="kube-system/kube-scheduler-image-330000"
	Oct 02 10:41:35 image-330000 kubelet[2403]: I1002 10:41:35.151231    2403 apiserver.go:52] "Watching apiserver"
	Oct 02 10:41:35 image-330000 kubelet[2403]: I1002 10:41:35.162891    2403 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 02 10:41:35 image-330000 kubelet[2403]: E1002 10:41:35.219201    2403 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-330000\" already exists" pod="kube-system/kube-apiserver-image-330000"
	Oct 02 10:41:35 image-330000 kubelet[2403]: I1002 10:41:35.225588    2403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-330000" podStartSLOduration=1.225545842 podCreationTimestamp="2023-10-02 10:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 10:41:35.224637551 +0000 UTC m=+1.123743001" watchObservedRunningTime="2023-10-02 10:41:35.225545842 +0000 UTC m=+1.124651251"
	Oct 02 10:41:35 image-330000 kubelet[2403]: I1002 10:41:35.231514    2403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-330000" podStartSLOduration=1.231491384 podCreationTimestamp="2023-10-02 10:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 10:41:35.228532301 +0000 UTC m=+1.127637751" watchObservedRunningTime="2023-10-02 10:41:35.231491384 +0000 UTC m=+1.130596835"
	Oct 02 10:41:35 image-330000 kubelet[2403]: I1002 10:41:35.234920    2403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-330000" podStartSLOduration=2.234883009 podCreationTimestamp="2023-10-02 10:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 10:41:35.231633301 +0000 UTC m=+1.130738751" watchObservedRunningTime="2023-10-02 10:41:35.234883009 +0000 UTC m=+1.133988460"
	Oct 02 10:41:35 image-330000 kubelet[2403]: I1002 10:41:35.239413    2403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-330000" podStartSLOduration=1.239391842 podCreationTimestamp="2023-10-02 10:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 10:41:35.235050842 +0000 UTC m=+1.134156293" watchObservedRunningTime="2023-10-02 10:41:35.239391842 +0000 UTC m=+1.138497293"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-330000 -n image-330000
helpers_test.go:261: (dbg) Run:  kubectl --context image-330000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-330000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-330000 describe pod storage-provisioner: exit status 1 (38.672667ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-330000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.02s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (50.84s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-545000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Done: kubectl --context ingress-addon-legacy-545000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.299855833s)
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-545000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context ingress-addon-legacy-545000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [658ed7f5-b147-4641-b3ea-ec35b0aa62aa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [658ed7f5-b147-4641-b3ea-ec35b0aa62aa] Running
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.0118295s
addons_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-545000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Run:  kubectl --context ingress-addon-legacy-545000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-545000 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:275: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.0415815s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:277: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:281: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached


stderr: 
addons_test.go:284: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-545000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-545000 addons disable ingress-dns --alsologtostderr -v=1: (6.165552917s)
addons_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-545000 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-545000 addons disable ingress --alsologtostderr -v=1: (7.084483083s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-545000 -n ingress-addon-legacy-545000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-545000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-680000 ssh findmnt            | functional-680000           | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |                     |
	|                | -T /mount3                               |                             |         |         |                     |                     |
	| update-context | functional-680000                        | functional-680000           | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-680000                        | functional-680000           | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-680000                        | functional-680000           | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:40 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-680000                        | functional-680000           | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT | 02 Oct 23 03:41 PDT |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-680000                        | functional-680000           | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-680000                        | functional-680000           | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-680000                        | functional-680000           | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-680000 ssh pgrep              | functional-680000           | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-680000 image build -t         | functional-680000           | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | localhost/my-image:functional-680000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-680000 image ls               | functional-680000           | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	| delete         | -p functional-680000                     | functional-680000           | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	| start          | -p image-330000 --driver=qemu2           | image-330000                | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-330000                | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-330000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-330000                | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-330000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-330000                | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-330000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-330000                | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-330000                          |                             |         |         |                     |                     |
	| delete         | -p image-330000                          | image-330000                | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:41 PDT |
	| start          | -p ingress-addon-legacy-545000           | ingress-addon-legacy-545000 | jenkins | v1.31.2 | 02 Oct 23 03:41 PDT | 02 Oct 23 03:42 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-545000              | ingress-addon-legacy-545000 | jenkins | v1.31.2 | 02 Oct 23 03:42 PDT | 02 Oct 23 03:43 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-545000              | ingress-addon-legacy-545000 | jenkins | v1.31.2 | 02 Oct 23 03:43 PDT | 02 Oct 23 03:43 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-545000              | ingress-addon-legacy-545000 | jenkins | v1.31.2 | 02 Oct 23 03:43 PDT | 02 Oct 23 03:43 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-545000 ip           | ingress-addon-legacy-545000 | jenkins | v1.31.2 | 02 Oct 23 03:43 PDT | 02 Oct 23 03:43 PDT |
	| addons         | ingress-addon-legacy-545000              | ingress-addon-legacy-545000 | jenkins | v1.31.2 | 02 Oct 23 03:43 PDT | 02 Oct 23 03:43 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-545000              | ingress-addon-legacy-545000 | jenkins | v1.31.2 | 02 Oct 23 03:43 PDT | 02 Oct 23 03:43 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 03:41:37
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 03:41:37.504639    2345 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:41:37.504781    2345 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:41:37.504784    2345 out.go:309] Setting ErrFile to fd 2...
	I1002 03:41:37.504786    2345 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:41:37.504914    2345 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:41:37.505946    2345 out.go:303] Setting JSON to false
	I1002 03:41:37.521797    2345 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":671,"bootTime":1696242626,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:41:37.521870    2345 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:41:37.526217    2345 out.go:177] * [ingress-addon-legacy-545000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:41:37.533057    2345 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:41:37.533096    2345 notify.go:220] Checking for updates...
	I1002 03:41:37.537205    2345 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:41:37.540243    2345 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:41:37.543218    2345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:41:37.546202    2345 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:41:37.549238    2345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:41:37.552350    2345 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:41:37.556190    2345 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:41:37.563149    2345 start.go:298] selected driver: qemu2
	I1002 03:41:37.563156    2345 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:41:37.563161    2345 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:41:37.565403    2345 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:41:37.568134    2345 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:41:37.571354    2345 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:41:37.571383    2345 cni.go:84] Creating CNI manager for ""
	I1002 03:41:37.571398    2345 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 03:41:37.571402    2345 start_flags.go:321] config:
	{Name:ingress-addon-legacy-545000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-545000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:41:37.576110    2345 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:41:37.583195    2345 out.go:177] * Starting control plane node ingress-addon-legacy-545000 in cluster ingress-addon-legacy-545000
	I1002 03:41:37.587011    2345 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1002 03:41:37.645362    2345 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1002 03:41:37.645398    2345 cache.go:57] Caching tarball of preloaded images
	I1002 03:41:37.645574    2345 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1002 03:41:37.654222    2345 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1002 03:41:37.662162    2345 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1002 03:41:37.741710    2345 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1002 03:41:43.891222    2345 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1002 03:41:43.891366    2345 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1002 03:41:44.641555    2345 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
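The preload tarball above is fetched with a `?checksum=md5:c8c260b886393123ce9d312d8ac2379e` query parameter and then verified on disk. A minimal sketch of that kind of streaming MD5 check (the function name and chunk size are illustrative, not minikube's actual implementation):

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through MD5 in fixed-size chunks, so multi-hundred-MB
    preload tarballs can be hashed without loading them into memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()
```

Comparing `md5_of(tarball_path)` against the `checksum=md5:...` value from the download URL is a driver-independent way to confirm a cached preload is intact.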
	I1002 03:41:44.641734    2345 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/config.json ...
	I1002 03:41:44.641750    2345 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/config.json: {Name:mk92d4960e241c16bd1375c93856d1eb04e31ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:41:44.642009    2345 start.go:365] acquiring machines lock for ingress-addon-legacy-545000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:41:44.642037    2345 start.go:369] acquired machines lock for "ingress-addon-legacy-545000" in 22.125µs
	I1002 03:41:44.642060    2345 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-545000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-545000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:41:44.642104    2345 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:41:44.652063    2345 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1002 03:41:44.667456    2345 start.go:159] libmachine.API.Create for "ingress-addon-legacy-545000" (driver="qemu2")
	I1002 03:41:44.667476    2345 client.go:168] LocalClient.Create starting
	I1002 03:41:44.667558    2345 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:41:44.667585    2345 main.go:141] libmachine: Decoding PEM data...
	I1002 03:41:44.667597    2345 main.go:141] libmachine: Parsing certificate...
	I1002 03:41:44.667639    2345 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:41:44.667657    2345 main.go:141] libmachine: Decoding PEM data...
	I1002 03:41:44.667665    2345 main.go:141] libmachine: Parsing certificate...
	I1002 03:41:44.667979    2345 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:41:44.775140    2345 main.go:141] libmachine: Creating SSH key...
	I1002 03:41:44.815847    2345 main.go:141] libmachine: Creating Disk image...
	I1002 03:41:44.815853    2345 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:41:44.816059    2345 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/disk.qcow2
	I1002 03:41:44.824965    2345 main.go:141] libmachine: STDOUT: 
	I1002 03:41:44.824981    2345 main.go:141] libmachine: STDERR: 
	I1002 03:41:44.825031    2345 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/disk.qcow2 +20000M
	I1002 03:41:44.832531    2345 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:41:44.832546    2345 main.go:141] libmachine: STDERR: 
	I1002 03:41:44.832564    2345 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/disk.qcow2
	I1002 03:41:44.832572    2345 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:41:44.832609    2345 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:72:e6:c1:17:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/disk.qcow2
	I1002 03:41:44.867434    2345 main.go:141] libmachine: STDOUT: 
	I1002 03:41:44.867456    2345 main.go:141] libmachine: STDERR: 
	I1002 03:41:44.867461    2345 main.go:141] libmachine: Attempt 0
	I1002 03:41:44.867475    2345 main.go:141] libmachine: Searching for 7a:72:e6:c1:17:7 in /var/db/dhcpd_leases ...
	I1002 03:41:44.867531    2345 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1002 03:41:44.867550    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:4b:df:74:ef:35 ID:1,8e:4b:df:74:ef:35 Lease:0x651befcc}
	I1002 03:41:44.867562    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ea:a7:e:53:bb:c6 ID:1,ea:a7:e:53:bb:c6 Lease:0x651bef0d}
	I1002 03:41:44.867567    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:32:59:b5:69:12:e9 ID:1,32:59:b5:69:12:e9 Lease:0x651a9d80}
	I1002 03:41:44.867589    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:6:8b:7a:74:5f:e ID:1,6:8b:7a:74:5f:e Lease:0x651bee73}
	I1002 03:41:46.869708    2345 main.go:141] libmachine: Attempt 1
	I1002 03:41:46.869793    2345 main.go:141] libmachine: Searching for 7a:72:e6:c1:17:7 in /var/db/dhcpd_leases ...
	I1002 03:41:46.870035    2345 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1002 03:41:46.870086    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:4b:df:74:ef:35 ID:1,8e:4b:df:74:ef:35 Lease:0x651befcc}
	I1002 03:41:46.870188    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ea:a7:e:53:bb:c6 ID:1,ea:a7:e:53:bb:c6 Lease:0x651bef0d}
	I1002 03:41:46.870222    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:32:59:b5:69:12:e9 ID:1,32:59:b5:69:12:e9 Lease:0x651a9d80}
	I1002 03:41:46.870254    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:6:8b:7a:74:5f:e ID:1,6:8b:7a:74:5f:e Lease:0x651bee73}
	I1002 03:41:48.870817    2345 main.go:141] libmachine: Attempt 2
	I1002 03:41:48.870883    2345 main.go:141] libmachine: Searching for 7a:72:e6:c1:17:7 in /var/db/dhcpd_leases ...
	I1002 03:41:48.870994    2345 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1002 03:41:48.871005    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:4b:df:74:ef:35 ID:1,8e:4b:df:74:ef:35 Lease:0x651befcc}
	I1002 03:41:48.871013    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ea:a7:e:53:bb:c6 ID:1,ea:a7:e:53:bb:c6 Lease:0x651bef0d}
	I1002 03:41:48.871018    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:32:59:b5:69:12:e9 ID:1,32:59:b5:69:12:e9 Lease:0x651a9d80}
	I1002 03:41:48.871023    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:6:8b:7a:74:5f:e ID:1,6:8b:7a:74:5f:e Lease:0x651bee73}
	I1002 03:41:50.873031    2345 main.go:141] libmachine: Attempt 3
	I1002 03:41:50.873063    2345 main.go:141] libmachine: Searching for 7a:72:e6:c1:17:7 in /var/db/dhcpd_leases ...
	I1002 03:41:50.873096    2345 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1002 03:41:50.873104    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:4b:df:74:ef:35 ID:1,8e:4b:df:74:ef:35 Lease:0x651befcc}
	I1002 03:41:50.873109    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ea:a7:e:53:bb:c6 ID:1,ea:a7:e:53:bb:c6 Lease:0x651bef0d}
	I1002 03:41:50.873115    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:32:59:b5:69:12:e9 ID:1,32:59:b5:69:12:e9 Lease:0x651a9d80}
	I1002 03:41:50.873120    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:6:8b:7a:74:5f:e ID:1,6:8b:7a:74:5f:e Lease:0x651bee73}
	I1002 03:41:52.875116    2345 main.go:141] libmachine: Attempt 4
	I1002 03:41:52.875124    2345 main.go:141] libmachine: Searching for 7a:72:e6:c1:17:7 in /var/db/dhcpd_leases ...
	I1002 03:41:52.875160    2345 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1002 03:41:52.875167    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:4b:df:74:ef:35 ID:1,8e:4b:df:74:ef:35 Lease:0x651befcc}
	I1002 03:41:52.875173    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ea:a7:e:53:bb:c6 ID:1,ea:a7:e:53:bb:c6 Lease:0x651bef0d}
	I1002 03:41:52.875178    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:32:59:b5:69:12:e9 ID:1,32:59:b5:69:12:e9 Lease:0x651a9d80}
	I1002 03:41:52.875183    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:6:8b:7a:74:5f:e ID:1,6:8b:7a:74:5f:e Lease:0x651bee73}
	I1002 03:41:54.876565    2345 main.go:141] libmachine: Attempt 5
	I1002 03:41:54.876584    2345 main.go:141] libmachine: Searching for 7a:72:e6:c1:17:7 in /var/db/dhcpd_leases ...
	I1002 03:41:54.876645    2345 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1002 03:41:54.876655    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:4b:df:74:ef:35 ID:1,8e:4b:df:74:ef:35 Lease:0x651befcc}
	I1002 03:41:54.876663    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ea:a7:e:53:bb:c6 ID:1,ea:a7:e:53:bb:c6 Lease:0x651bef0d}
	I1002 03:41:54.876669    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:32:59:b5:69:12:e9 ID:1,32:59:b5:69:12:e9 Lease:0x651a9d80}
	I1002 03:41:54.876674    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:6:8b:7a:74:5f:e ID:1,6:8b:7a:74:5f:e Lease:0x651bee73}
	I1002 03:41:56.878748    2345 main.go:141] libmachine: Attempt 6
	I1002 03:41:56.878800    2345 main.go:141] libmachine: Searching for 7a:72:e6:c1:17:7 in /var/db/dhcpd_leases ...
	I1002 03:41:56.878917    2345 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1002 03:41:56.878933    2345 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:7a:72:e6:c1:17:7 ID:1,7a:72:e6:c1:17:7 Lease:0x651beff3}
	I1002 03:41:56.878938    2345 main.go:141] libmachine: Found match: 7a:72:e6:c1:17:7
	I1002 03:41:56.878952    2345 main.go:141] libmachine: IP: 192.168.105.6
	I1002 03:41:56.878959    2345 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
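Attempts 4 through 6 above poll `/var/db/dhcpd_leases` every two seconds until an entry for the VM's MAC (`7a:72:e6:c1:17:7`) appears, then take its IP. As a rough sketch of that lookup — not minikube's actual parser — assuming the macOS vmnet lease-file format with `name=`/`ip_address=`/`hw_address=` lines per block:

```go
package main

import (
	"fmt"
	"strings"
)

// findIPByMAC scans the text of a macOS /var/db/dhcpd_leases file for the
// lease block whose hw_address matches the given MAC and returns its IP.
// The "1," prefix on hw_address is the DHCP hardware type (Ethernet).
func findIPByMAC(leases, mac string) (string, bool) {
	var ip string
	for _, line := range strings.Split(leases, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=1,"):
			if strings.TrimPrefix(line, "hw_address=1,") == mac {
				return ip, true
			}
		case line == "}":
			ip = "" // end of one lease block
		}
	}
	return "", false
}

func main() {
	leases := "{\n\tname=minikube\n\tip_address=192.168.105.6\n\thw_address=1,7a:72:e6:c1:17:7\n\tlease=0x651beff3\n}"
	ip, ok := findIPByMAC(leases, "7a:72:e6:c1:17:7")
	fmt.Println(ip, ok) // prints "192.168.105.6 true"
}
```

A caller would retry this on a timer, exactly as the numbered "Attempt N" lines above show.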
	I1002 03:41:58.892060    2345 machine.go:88] provisioning docker machine ...
	I1002 03:41:58.892121    2345 buildroot.go:166] provisioning hostname "ingress-addon-legacy-545000"
	I1002 03:41:58.892277    2345 main.go:141] libmachine: Using SSH client type: native
	I1002 03:41:58.893100    2345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b3c760] 0x102b3eed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1002 03:41:58.893126    2345 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-545000 && echo "ingress-addon-legacy-545000" | sudo tee /etc/hostname
	I1002 03:41:58.983222    2345 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-545000
	
	I1002 03:41:58.983340    2345 main.go:141] libmachine: Using SSH client type: native
	I1002 03:41:58.983820    2345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b3c760] 0x102b3eed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1002 03:41:58.983845    2345 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-545000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-545000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-545000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 03:41:59.051421    2345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
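The shell snippet run over SSH above is an idempotent /etc/hosts edit: do nothing if the hostname is already mapped, rewrite an existing `127.0.1.1` alias if there is one, otherwise append a fresh line. A simplified Go sketch of the same logic (the real script uses grep/sed `\s` patterns and writes via `sudo tee`):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostname returns the hosts-file contents with `name` guaranteed to
// be mapped, mirroring the grep/sed/tee script in the log above.
func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 1 && f[len(f)-1] == name {
			return hosts // hostname already mapped; script is a no-op
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // replace the loopback alias
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name // no alias yet: append
}

func main() {
	fmt.Println(ensureHostname("127.0.0.1 localhost", "ingress-addon-legacy-545000"))
}
```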
	I1002 03:41:59.051443    2345 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17340-994/.minikube CaCertPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17340-994/.minikube}
	I1002 03:41:59.051482    2345 buildroot.go:174] setting up certificates
	I1002 03:41:59.051495    2345 provision.go:83] configureAuth start
	I1002 03:41:59.051503    2345 provision.go:138] copyHostCerts
	I1002 03:41:59.051542    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17340-994/.minikube/key.pem
	I1002 03:41:59.051626    2345 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-994/.minikube/key.pem, removing ...
	I1002 03:41:59.051635    2345 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-994/.minikube/key.pem
	I1002 03:41:59.051856    2345 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17340-994/.minikube/key.pem (1679 bytes)
	I1002 03:41:59.052086    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17340-994/.minikube/ca.pem
	I1002 03:41:59.052110    2345 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-994/.minikube/ca.pem, removing ...
	I1002 03:41:59.052113    2345 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-994/.minikube/ca.pem
	I1002 03:41:59.052182    2345 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17340-994/.minikube/ca.pem (1082 bytes)
	I1002 03:41:59.052296    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17340-994/.minikube/cert.pem
	I1002 03:41:59.052316    2345 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-994/.minikube/cert.pem, removing ...
	I1002 03:41:59.052324    2345 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-994/.minikube/cert.pem
	I1002 03:41:59.052386    2345 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17340-994/.minikube/cert.pem (1123 bytes)
	I1002 03:41:59.052514    2345 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-545000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-545000]
	I1002 03:41:59.166935    2345 provision.go:172] copyRemoteCerts
	I1002 03:41:59.166976    2345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 03:41:59.166986    2345 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/id_rsa Username:docker}
	I1002 03:41:59.199112    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 03:41:59.199159    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 03:41:59.205983    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 03:41:59.206016    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1002 03:41:59.213179    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 03:41:59.213220    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 03:41:59.220046    2345 provision.go:86] duration metric: configureAuth took 168.548042ms
	I1002 03:41:59.220054    2345 buildroot.go:189] setting minikube options for container-runtime
	I1002 03:41:59.220152    2345 config.go:182] Loaded profile config "ingress-addon-legacy-545000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1002 03:41:59.220185    2345 main.go:141] libmachine: Using SSH client type: native
	I1002 03:41:59.220404    2345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b3c760] 0x102b3eed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1002 03:41:59.220411    2345 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 03:41:59.276957    2345 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1002 03:41:59.276966    2345 buildroot.go:70] root file system type: tmpfs
	I1002 03:41:59.277032    2345 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 03:41:59.277078    2345 main.go:141] libmachine: Using SSH client type: native
	I1002 03:41:59.277317    2345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b3c760] 0x102b3eed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1002 03:41:59.277352    2345 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 03:41:59.339248    2345 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 03:41:59.339299    2345 main.go:141] libmachine: Using SSH client type: native
	I1002 03:41:59.339559    2345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b3c760] 0x102b3eed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1002 03:41:59.339572    2345 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 03:41:59.696057    2345 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1002 03:41:59.696070    2345 machine.go:91] provisioned docker machine in 804.003833ms
	I1002 03:41:59.696076    2345 client.go:171] LocalClient.Create took 15.028900042s
	I1002 03:41:59.696086    2345 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-545000" took 15.028941666s
	I1002 03:41:59.696093    2345 start.go:300] post-start starting for "ingress-addon-legacy-545000" (driver="qemu2")
	I1002 03:41:59.696098    2345 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 03:41:59.696168    2345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 03:41:59.696177    2345 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/id_rsa Username:docker}
	I1002 03:41:59.727263    2345 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 03:41:59.728413    2345 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 03:41:59.728419    2345 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-994/.minikube/addons for local assets ...
	I1002 03:41:59.728490    2345 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-994/.minikube/files for local assets ...
	I1002 03:41:59.728586    2345 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem -> 14092.pem in /etc/ssl/certs
	I1002 03:41:59.728591    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem -> /etc/ssl/certs/14092.pem
	I1002 03:41:59.728699    2345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 03:41:59.731021    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem --> /etc/ssl/certs/14092.pem (1708 bytes)
	I1002 03:41:59.737666    2345 start.go:303] post-start completed in 41.568709ms
	I1002 03:41:59.738044    2345 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/config.json ...
	I1002 03:41:59.738215    2345 start.go:128] duration metric: createHost completed in 15.09641125s
	I1002 03:41:59.738243    2345 main.go:141] libmachine: Using SSH client type: native
	I1002 03:41:59.738455    2345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b3c760] 0x102b3eed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1002 03:41:59.738462    2345 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 03:41:59.792110    2345 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696243319.519471377
	
	I1002 03:41:59.792116    2345 fix.go:206] guest clock: 1696243319.519471377
	I1002 03:41:59.792120    2345 fix.go:219] Guest: 2023-10-02 03:41:59.519471377 -0700 PDT Remote: 2023-10-02 03:41:59.738221 -0700 PDT m=+22.253188210 (delta=-218.749623ms)
	I1002 03:41:59.792135    2345 fix.go:190] guest clock delta is within tolerance: -218.749623ms
	I1002 03:41:59.792138    2345 start.go:83] releasing machines lock for "ingress-addon-legacy-545000", held for 15.150394s
	I1002 03:41:59.792398    2345 ssh_runner.go:195] Run: cat /version.json
	I1002 03:41:59.792406    2345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 03:41:59.792424    2345 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/id_rsa Username:docker}
	I1002 03:41:59.792406    2345 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/id_rsa Username:docker}
	I1002 03:41:59.861885    2345 ssh_runner.go:195] Run: systemctl --version
	I1002 03:41:59.864025    2345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 03:41:59.865861    2345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 03:41:59.865889    2345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1002 03:41:59.869186    2345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1002 03:41:59.874051    2345 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 03:41:59.874058    2345 start.go:469] detecting cgroup driver to use...
	I1002 03:41:59.874132    2345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 03:41:59.880024    2345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1002 03:41:59.882948    2345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 03:41:59.886146    2345 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 03:41:59.886171    2345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 03:41:59.889469    2345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 03:41:59.892401    2345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 03:41:59.895253    2345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 03:41:59.898165    2345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 03:41:59.901474    2345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 03:41:59.904832    2345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 03:41:59.907525    2345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 03:41:59.910286    2345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:41:59.990498    2345 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 03:41:59.999990    2345 start.go:469] detecting cgroup driver to use...
	I1002 03:42:00.000062    2345 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 03:42:00.005955    2345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 03:42:00.012267    2345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 03:42:00.020320    2345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 03:42:00.025326    2345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 03:42:00.030258    2345 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 03:42:00.081470    2345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 03:42:00.087686    2345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 03:42:00.093191    2345 ssh_runner.go:195] Run: which cri-dockerd
	I1002 03:42:00.094537    2345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 03:42:00.097447    2345 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 03:42:00.102433    2345 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 03:42:00.181035    2345 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 03:42:00.255417    2345 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 03:42:00.255479    2345 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 03:42:00.260589    2345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:42:00.339695    2345 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 03:42:01.499247    2345 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.159559625s)
	I1002 03:42:01.499319    2345 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 03:42:01.515536    2345 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 03:42:01.533767    2345 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I1002 03:42:01.533866    2345 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1002 03:42:01.535412    2345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 03:42:01.538792    2345 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1002 03:42:01.538834    2345 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 03:42:01.544114    2345 docker.go:664] Got preloaded images: 
	I1002 03:42:01.544125    2345 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1002 03:42:01.544177    2345 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1002 03:42:01.546988    2345 ssh_runner.go:195] Run: which lz4
	I1002 03:42:01.548148    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1002 03:42:01.548222    2345 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 03:42:01.549442    2345 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 03:42:01.549454    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I1002 03:42:03.190153    2345 docker.go:628] Took 1.641989 seconds to copy over tarball
	I1002 03:42:03.190216    2345 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 03:42:04.514483    2345 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.3242775s)
	I1002 03:42:04.514497    2345 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 03:42:04.536288    2345 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1002 03:42:04.539715    2345 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1002 03:42:04.550317    2345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 03:42:04.614565    2345 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 03:42:06.125927    2345 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.511374291s)
	I1002 03:42:06.126015    2345 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 03:42:06.131946    2345 docker.go:664] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1002 03:42:06.131953    2345 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1002 03:42:06.131957    2345 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 03:42:06.145929    2345 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1002 03:42:06.145978    2345 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 03:42:06.146039    2345 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 03:42:06.146200    2345 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 03:42:06.146286    2345 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 03:42:06.146371    2345 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1002 03:42:06.146418    2345 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 03:42:06.146591    2345 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1002 03:42:06.155482    2345 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1002 03:42:06.155543    2345 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1002 03:42:06.155604    2345 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 03:42:06.155624    2345 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 03:42:06.155651    2345 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 03:42:06.155673    2345 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1002 03:42:06.155685    2345 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 03:42:06.155719    2345 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W1002 03:42:06.735982    2345 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1002 03:42:06.736099    2345 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1002 03:42:06.742378    2345 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1002 03:42:06.742418    2345 docker.go:317] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1002 03:42:06.742464    2345 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1002 03:42:06.751894    2345 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1002 03:42:06.796397    2345 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1002 03:42:06.802410    2345 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1002 03:42:06.802435    2345 docker.go:317] Removing image: registry.k8s.io/pause:3.2
	I1002 03:42:06.802471    2345 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1002 03:42:06.808411    2345 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W1002 03:42:06.998416    2345 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 03:42:06.998522    2345 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 03:42:07.004282    2345 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1002 03:42:07.004302    2345 docker.go:317] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 03:42:07.004346    2345 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 03:42:07.009878    2345 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W1002 03:42:07.214908    2345 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 03:42:07.215079    2345 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1002 03:42:07.221017    2345 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1002 03:42:07.221041    2345 docker.go:317] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 03:42:07.221078    2345 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1002 03:42:07.226393    2345 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W1002 03:42:07.437851    2345 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 03:42:07.437970    2345 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1002 03:42:07.443975    2345 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1002 03:42:07.444005    2345 docker.go:317] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 03:42:07.444049    2345 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1002 03:42:07.449799    2345 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W1002 03:42:07.731282    2345 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1002 03:42:07.731400    2345 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1002 03:42:07.737412    2345 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1002 03:42:07.737438    2345 docker.go:317] Removing image: registry.k8s.io/coredns:1.6.7
	I1002 03:42:07.737488    2345 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1002 03:42:07.743213    2345 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W1002 03:42:07.877327    2345 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 03:42:07.877434    2345 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1002 03:42:07.883340    2345 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1002 03:42:07.883362    2345 docker.go:317] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 03:42:07.883401    2345 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1002 03:42:07.889378    2345 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W1002 03:42:08.580367    2345 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1002 03:42:08.580943    2345 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 03:42:08.605154    2345 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1002 03:42:08.605276    2345 docker.go:317] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 03:42:08.605395    2345 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 03:42:08.630728    2345 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 03:42:08.630807    2345 cache_images.go:92] LoadImages completed in 2.498894667s
	W1002 03:42:08.630867    2345 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I1002 03:42:08.630961    2345 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 03:42:08.645204    2345 cni.go:84] Creating CNI manager for ""
	I1002 03:42:08.645222    2345 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 03:42:08.645235    2345 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 03:42:08.645261    2345 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-545000 NodeName:ingress-addon-legacy-545000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 03:42:08.645396    2345 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-545000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 03:42:08.645456    2345 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-545000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-545000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 03:42:08.645526    2345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1002 03:42:08.650331    2345 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 03:42:08.650373    2345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 03:42:08.654360    2345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I1002 03:42:08.660961    2345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1002 03:42:08.666797    2345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I1002 03:42:08.672291    2345 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I1002 03:42:08.673589    2345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 03:42:08.677565    2345 certs.go:56] Setting up /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000 for IP: 192.168.105.6
	I1002 03:42:08.677574    2345 certs.go:190] acquiring lock for shared ca certs: {Name:mkb95ac88d0fec37f1e658f6bb500deee9ee7493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:42:08.677696    2345 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17340-994/.minikube/ca.key
	I1002 03:42:08.677732    2345 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17340-994/.minikube/proxy-client-ca.key
	I1002 03:42:08.677760    2345 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.key
	I1002 03:42:08.677767    2345 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt with IP's: []
	I1002 03:42:08.745077    2345 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt ...
	I1002 03:42:08.745083    2345 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: {Name:mkae5211f63d328aa4f7d623ca98deae34f7a65e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:42:08.745323    2345 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.key ...
	I1002 03:42:08.745331    2345 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.key: {Name:mkee22e86d4e27c4a564a5d382e257ceec9c817a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:42:08.745450    2345 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.key.b354f644
	I1002 03:42:08.745458    2345 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 03:42:08.893203    2345 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.crt.b354f644 ...
	I1002 03:42:08.893208    2345 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.crt.b354f644: {Name:mkd2db1ceaecc810f80ed33fcd920dd7fc4a159c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:42:08.893353    2345 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.key.b354f644 ...
	I1002 03:42:08.893356    2345 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.key.b354f644: {Name:mka896b167b941691f670843c1c626abe76872a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:42:08.893452    2345 certs.go:337] copying /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.crt
	I1002 03:42:08.893718    2345 certs.go:341] copying /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.key
	I1002 03:42:08.893872    2345 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/proxy-client.key
	I1002 03:42:08.893886    2345 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/proxy-client.crt with IP's: []
	I1002 03:42:08.935340    2345 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/proxy-client.crt ...
	I1002 03:42:08.935344    2345 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/proxy-client.crt: {Name:mkd4a009ac3013507ad0c104ca11d391836dda65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:42:08.935493    2345 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/proxy-client.key ...
	I1002 03:42:08.935496    2345 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/proxy-client.key: {Name:mkb50c92218e7da3bf975ac043e1513001954050 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:42:08.935615    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 03:42:08.935630    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 03:42:08.935640    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 03:42:08.935649    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 03:42:08.935659    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 03:42:08.935668    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 03:42:08.935677    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 03:42:08.935685    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 03:42:08.935742    2345 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/1409.pem (1338 bytes)
	W1002 03:42:08.935771    2345 certs.go:433] ignoring /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/1409_empty.pem, impossibly tiny 0 bytes
	I1002 03:42:08.935776    2345 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 03:42:08.935797    2345 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem (1082 bytes)
	I1002 03:42:08.935815    2345 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem (1123 bytes)
	I1002 03:42:08.935845    2345 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/Users/jenkins/minikube-integration/17340-994/.minikube/certs/key.pem (1679 bytes)
	I1002 03:42:08.935886    2345 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem (1708 bytes)
	I1002 03:42:08.935908    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 03:42:08.935921    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/certs/1409.pem -> /usr/share/ca-certificates/1409.pem
	I1002 03:42:08.935930    2345 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem -> /usr/share/ca-certificates/14092.pem
	I1002 03:42:08.936234    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 03:42:08.943413    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 03:42:08.950069    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 03:42:08.957454    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 03:42:08.964675    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 03:42:08.971470    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 03:42:08.978143    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 03:42:08.985345    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 03:42:08.992430    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 03:42:08.999351    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/certs/1409.pem --> /usr/share/ca-certificates/1409.pem (1338 bytes)
	I1002 03:42:09.006099    2345 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/ssl/certs/14092.pem --> /usr/share/ca-certificates/14092.pem (1708 bytes)
	I1002 03:42:09.013206    2345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 03:42:09.018549    2345 ssh_runner.go:195] Run: openssl version
	I1002 03:42:09.020537    2345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1409.pem && ln -fs /usr/share/ca-certificates/1409.pem /etc/ssl/certs/1409.pem"
	I1002 03:42:09.023454    2345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1409.pem
	I1002 03:42:09.024869    2345 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:37 /usr/share/ca-certificates/1409.pem
	I1002 03:42:09.024892    2345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1409.pem
	I1002 03:42:09.026775    2345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1409.pem /etc/ssl/certs/51391683.0"
	I1002 03:42:09.030055    2345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14092.pem && ln -fs /usr/share/ca-certificates/14092.pem /etc/ssl/certs/14092.pem"
	I1002 03:42:09.033458    2345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14092.pem
	I1002 03:42:09.034897    2345 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:37 /usr/share/ca-certificates/14092.pem
	I1002 03:42:09.034917    2345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14092.pem
	I1002 03:42:09.036690    2345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14092.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 03:42:09.039610    2345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 03:42:09.042445    2345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 03:42:09.044113    2345 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I1002 03:42:09.044131    2345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 03:42:09.045834    2345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 03:42:09.049175    2345 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 03:42:09.050491    2345 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 03:42:09.050520    2345 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-545000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-545000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:42:09.050594    2345 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 03:42:09.056155    2345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 03:42:09.059031    2345 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 03:42:09.061906    2345 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 03:42:09.065107    2345 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 03:42:09.065122    2345 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1002 03:42:09.092631    2345 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1002 03:42:09.092663    2345 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 03:42:09.187617    2345 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 03:42:09.187671    2345 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 03:42:09.187713    2345 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 03:42:09.237176    2345 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 03:42:09.237762    2345 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 03:42:09.237800    2345 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 03:42:09.330222    2345 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 03:42:09.341436    2345 out.go:204]   - Generating certificates and keys ...
	I1002 03:42:09.341473    2345 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 03:42:09.341503    2345 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 03:42:09.416505    2345 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 03:42:09.454928    2345 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 03:42:09.593672    2345 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 03:42:09.766148    2345 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 03:42:10.091601    2345 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 03:42:10.091693    2345 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-545000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I1002 03:42:10.331578    2345 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 03:42:10.331655    2345 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-545000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I1002 03:42:10.399072    2345 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 03:42:10.454918    2345 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 03:42:10.522165    2345 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 03:42:10.522198    2345 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 03:42:10.610305    2345 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 03:42:10.742950    2345 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 03:42:10.796596    2345 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 03:42:10.904876    2345 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 03:42:10.905115    2345 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 03:42:10.909214    2345 out.go:204]   - Booting up control plane ...
	I1002 03:42:10.909284    2345 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 03:42:10.909338    2345 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 03:42:10.909390    2345 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 03:42:10.909662    2345 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 03:42:10.910881    2345 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 03:42:22.919057    2345 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.007520 seconds
	I1002 03:42:22.919281    2345 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 03:42:22.941526    2345 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 03:42:23.461218    2345 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 03:42:23.461348    2345 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-545000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1002 03:42:23.981354    2345 kubeadm.go:322] [bootstrap-token] Using token: yrm7f8.b7a8086pmx7x9mlq
	I1002 03:42:23.985435    2345 out.go:204]   - Configuring RBAC rules ...
	I1002 03:42:23.985578    2345 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 03:42:23.990136    2345 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 03:42:23.998667    2345 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 03:42:24.000731    2345 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 03:42:24.003779    2345 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 03:42:24.005838    2345 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 03:42:24.012479    2345 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 03:42:24.205129    2345 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 03:42:24.391158    2345 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 03:42:24.391801    2345 kubeadm.go:322] 
	I1002 03:42:24.391837    2345 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 03:42:24.391843    2345 kubeadm.go:322] 
	I1002 03:42:24.391887    2345 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 03:42:24.391892    2345 kubeadm.go:322] 
	I1002 03:42:24.391905    2345 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 03:42:24.391936    2345 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 03:42:24.391962    2345 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 03:42:24.391965    2345 kubeadm.go:322] 
	I1002 03:42:24.391990    2345 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 03:42:24.392040    2345 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 03:42:24.392079    2345 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 03:42:24.392082    2345 kubeadm.go:322] 
	I1002 03:42:24.392129    2345 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 03:42:24.392176    2345 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 03:42:24.392179    2345 kubeadm.go:322] 
	I1002 03:42:24.392224    2345 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token yrm7f8.b7a8086pmx7x9mlq \
	I1002 03:42:24.392304    2345 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8318d3e3f19b90e0283160a1353f59cf85f53baf0a5ecb509b7354435554388c \
	I1002 03:42:24.392317    2345 kubeadm.go:322]     --control-plane 
	I1002 03:42:24.392319    2345 kubeadm.go:322] 
	I1002 03:42:24.392364    2345 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 03:42:24.392370    2345 kubeadm.go:322] 
	I1002 03:42:24.392409    2345 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token yrm7f8.b7a8086pmx7x9mlq \
	I1002 03:42:24.392472    2345 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8318d3e3f19b90e0283160a1353f59cf85f53baf0a5ecb509b7354435554388c 
	I1002 03:42:24.392561    2345 kubeadm.go:322] W1002 10:42:08.820069    1415 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1002 03:42:24.392651    2345 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1002 03:42:24.392708    2345 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I1002 03:42:24.392752    2345 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 03:42:24.392827    2345 kubeadm.go:322] W1002 10:42:10.636352    1415 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 03:42:24.392909    2345 kubeadm.go:322] W1002 10:42:10.636971    1415 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 03:42:24.392916    2345 cni.go:84] Creating CNI manager for ""
	I1002 03:42:24.392923    2345 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 03:42:24.392934    2345 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 03:42:24.393000    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=ingress-addon-legacy-545000 minikube.k8s.io/updated_at=2023_10_02T03_42_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:24.393001    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:24.399366    2345 ops.go:34] apiserver oom_adj: -16
	I1002 03:42:24.464349    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:24.498055    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:25.034943    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:25.534719    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:26.034759    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:26.534744    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:27.034961    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:27.533889    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:28.034852    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:28.534786    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:29.034730    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:29.534533    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:30.034760    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:30.534705    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:31.034802    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:31.534592    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:32.034750    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:32.533187    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:33.034754    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:33.534730    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:34.034660    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:34.534638    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:35.034712    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:35.534611    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:36.034640    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:36.534627    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:37.034598    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:37.534450    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:38.034443    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:38.534580    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:39.032931    2345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 03:42:39.067922    2345 kubeadm.go:1081] duration metric: took 14.675289625s to wait for elevateKubeSystemPrivileges.
	I1002 03:42:39.067937    2345 kubeadm.go:406] StartCluster complete in 30.018047833s
	I1002 03:42:39.067946    2345 settings.go:142] acquiring lock: {Name:mk3f5122457e6ee64cf5dd538efdbb968ff53214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:42:39.068038    2345 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:42:39.068438    2345 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/kubeconfig: {Name:mkba984fcf92a3f610125e890c28c2ff94eec9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:42:39.068685    2345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 03:42:39.068727    2345 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 03:42:39.068760    2345 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-545000"
	I1002 03:42:39.068772    2345 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-545000"
	I1002 03:42:39.068787    2345 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-545000"
	I1002 03:42:39.068794    2345 host.go:66] Checking if "ingress-addon-legacy-545000" exists ...
	I1002 03:42:39.068799    2345 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-545000"
	I1002 03:42:39.069203    2345 kapi.go:59] client config for ingress-addon-legacy-545000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.key", CAFile:"/Users/jenkins/minikube-integration/17340-994/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e02c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 03:42:39.069336    2345 config.go:182] Loaded profile config "ingress-addon-legacy-545000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1002 03:42:39.069696    2345 cert_rotation.go:137] Starting client certificate rotation controller
	I1002 03:42:39.070242    2345 kapi.go:59] client config for ingress-addon-legacy-545000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.key", CAFile:"/Users/jenkins/minikube-integration/17340-994/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e02c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 03:42:39.070337    2345 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-545000"
	I1002 03:42:39.070351    2345 host.go:66] Checking if "ingress-addon-legacy-545000" exists ...
	I1002 03:42:39.073775    2345 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 03:42:39.076857    2345 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 03:42:39.076863    2345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 03:42:39.076871    2345 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/id_rsa Username:docker}
	I1002 03:42:39.077667    2345 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 03:42:39.077673    2345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 03:42:39.077677    2345 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/ingress-addon-legacy-545000/id_rsa Username:docker}
	I1002 03:42:39.095194    2345 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-545000" context rescaled to 1 replicas
	I1002 03:42:39.095218    2345 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:42:39.101702    2345 out.go:177] * Verifying Kubernetes components...
	I1002 03:42:39.109791    2345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 03:42:39.126521    2345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 03:42:39.131987    2345 kapi.go:59] client config for ingress-addon-legacy-545000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.key", CAFile:"/Users/jenkins/minikube-integration/17340-994/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e02c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 03:42:39.132135    2345 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-545000" to be "Ready" ...
	I1002 03:42:39.133729    2345 node_ready.go:49] node "ingress-addon-legacy-545000" has status "Ready":"True"
	I1002 03:42:39.133735    2345 node_ready.go:38] duration metric: took 1.593041ms waiting for node "ingress-addon-legacy-545000" to be "Ready" ...
	I1002 03:42:39.133739    2345 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 03:42:39.136849    2345 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-cdc7h" in "kube-system" namespace to be "Ready" ...
	I1002 03:42:39.143747    2345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 03:42:39.158313    2345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 03:42:39.306724    2345 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I1002 03:42:39.366524    2345 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1002 03:42:39.373478    2345 addons.go:502] enable addons completed in 304.761209ms: enabled=[storage-provisioner default-storageclass]
	I1002 03:42:41.146215    2345 pod_ready.go:102] pod "coredns-66bff467f8-cdc7h" in "kube-system" namespace has status "Ready":"False"
	I1002 03:42:43.155873    2345 pod_ready.go:102] pod "coredns-66bff467f8-cdc7h" in "kube-system" namespace has status "Ready":"False"
	I1002 03:42:45.158185    2345 pod_ready.go:102] pod "coredns-66bff467f8-cdc7h" in "kube-system" namespace has status "Ready":"False"
	I1002 03:42:47.656889    2345 pod_ready.go:102] pod "coredns-66bff467f8-cdc7h" in "kube-system" namespace has status "Ready":"False"
	I1002 03:42:50.157261    2345 pod_ready.go:102] pod "coredns-66bff467f8-cdc7h" in "kube-system" namespace has status "Ready":"False"
	I1002 03:42:50.651980    2345 pod_ready.go:97] error getting pod "coredns-66bff467f8-cdc7h" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-cdc7h" not found
	I1002 03:42:50.652086    2345 pod_ready.go:81] duration metric: took 11.515454958s waiting for pod "coredns-66bff467f8-cdc7h" in "kube-system" namespace to be "Ready" ...
	E1002 03:42:50.652110    2345 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-cdc7h" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-cdc7h" not found
	I1002 03:42:50.652124    2345 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-zr8t9" in "kube-system" namespace to be "Ready" ...
	I1002 03:42:50.661298    2345 pod_ready.go:92] pod "coredns-66bff467f8-zr8t9" in "kube-system" namespace has status "Ready":"True"
	I1002 03:42:50.661337    2345 pod_ready.go:81] duration metric: took 9.202ms waiting for pod "coredns-66bff467f8-zr8t9" in "kube-system" namespace to be "Ready" ...
	I1002 03:42:50.661354    2345 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-545000" in "kube-system" namespace to be "Ready" ...
	I1002 03:42:50.668039    2345 pod_ready.go:92] pod "etcd-ingress-addon-legacy-545000" in "kube-system" namespace has status "Ready":"True"
	I1002 03:42:50.668056    2345 pod_ready.go:81] duration metric: took 6.692208ms waiting for pod "etcd-ingress-addon-legacy-545000" in "kube-system" namespace to be "Ready" ...
	I1002 03:42:50.668067    2345 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-545000" in "kube-system" namespace to be "Ready" ...
	I1002 03:42:50.673153    2345 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-545000" in "kube-system" namespace has status "Ready":"True"
	I1002 03:42:50.673167    2345 pod_ready.go:81] duration metric: took 5.090458ms waiting for pod "kube-apiserver-ingress-addon-legacy-545000" in "kube-system" namespace to be "Ready" ...
	I1002 03:42:50.673177    2345 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-545000" in "kube-system" namespace to be "Ready" ...
	I1002 03:42:50.677640    2345 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-545000" in "kube-system" namespace has status "Ready":"True"
	I1002 03:42:50.677654    2345 pod_ready.go:81] duration metric: took 4.467ms waiting for pod "kube-controller-manager-ingress-addon-legacy-545000" in "kube-system" namespace to be "Ready" ...
	I1002 03:42:50.677663    2345 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-545000" in "kube-system" namespace to be "Ready" ...
	I1002 03:42:50.845425    2345 request.go:629] Waited for 165.510041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-545000
	I1002 03:42:50.851443    2345 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-545000" in "kube-system" namespace has status "Ready":"True"
	I1002 03:42:50.851472    2345 pod_ready.go:81] duration metric: took 173.800125ms waiting for pod "kube-scheduler-ingress-addon-legacy-545000" in "kube-system" namespace to be "Ready" ...
	I1002 03:42:50.851498    2345 pod_ready.go:38] duration metric: took 11.717991541s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 03:42:50.851547    2345 api_server.go:52] waiting for apiserver process to appear ...
	I1002 03:42:50.851832    2345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 03:42:50.870953    2345 api_server.go:72] duration metric: took 11.775959166s to wait for apiserver process to appear ...
	I1002 03:42:50.870974    2345 api_server.go:88] waiting for apiserver healthz status ...
	I1002 03:42:50.870990    2345 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I1002 03:42:50.879737    2345 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I1002 03:42:50.880838    2345 api_server.go:141] control plane version: v1.18.20
	I1002 03:42:50.880854    2345 api_server.go:131] duration metric: took 9.872834ms to wait for apiserver health ...
	I1002 03:42:50.880862    2345 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 03:42:51.045381    2345 request.go:629] Waited for 164.441167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I1002 03:42:51.058356    2345 system_pods.go:59] 7 kube-system pods found
	I1002 03:42:51.058399    2345 system_pods.go:61] "coredns-66bff467f8-zr8t9" [5cc69159-7156-4c05-9e29-691f1f0d27d8] Running
	I1002 03:42:51.058413    2345 system_pods.go:61] "etcd-ingress-addon-legacy-545000" [50e39da9-7bbe-4f85-a6eb-42e1a25e8b4d] Running
	I1002 03:42:51.058428    2345 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-545000" [1376119f-ef37-44a0-ad66-e9cb7be7029e] Running
	I1002 03:42:51.058439    2345 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-545000" [fcafe23a-3f0a-4bab-8998-d9a47c6da01e] Running
	I1002 03:42:51.058448    2345 system_pods.go:61] "kube-proxy-kv89x" [a83ce56c-fc49-4b5e-823e-cf2cde34115e] Running
	I1002 03:42:51.058463    2345 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-545000" [ab9d70c0-4704-4d8a-ae62-755f216723a6] Running
	I1002 03:42:51.058475    2345 system_pods.go:61] "storage-provisioner" [0054025f-1375-4a62-a347-c1a77e8b5899] Running
	I1002 03:42:51.058485    2345 system_pods.go:74] duration metric: took 177.618ms to wait for pod list to return data ...
	I1002 03:42:51.058508    2345 default_sa.go:34] waiting for default service account to be created ...
	I1002 03:42:51.245483    2345 request.go:629] Waited for 186.81125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I1002 03:42:51.252938    2345 default_sa.go:45] found service account: "default"
	I1002 03:42:51.252999    2345 default_sa.go:55] duration metric: took 194.456333ms for default service account to be created ...
	I1002 03:42:51.253026    2345 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 03:42:51.445435    2345 request.go:629] Waited for 192.261041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I1002 03:42:51.459610    2345 system_pods.go:86] 7 kube-system pods found
	I1002 03:42:51.459650    2345 system_pods.go:89] "coredns-66bff467f8-zr8t9" [5cc69159-7156-4c05-9e29-691f1f0d27d8] Running
	I1002 03:42:51.459661    2345 system_pods.go:89] "etcd-ingress-addon-legacy-545000" [50e39da9-7bbe-4f85-a6eb-42e1a25e8b4d] Running
	I1002 03:42:51.459676    2345 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-545000" [1376119f-ef37-44a0-ad66-e9cb7be7029e] Running
	I1002 03:42:51.459687    2345 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-545000" [fcafe23a-3f0a-4bab-8998-d9a47c6da01e] Running
	I1002 03:42:51.459700    2345 system_pods.go:89] "kube-proxy-kv89x" [a83ce56c-fc49-4b5e-823e-cf2cde34115e] Running
	I1002 03:42:51.459712    2345 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-545000" [ab9d70c0-4704-4d8a-ae62-755f216723a6] Running
	I1002 03:42:51.459723    2345 system_pods.go:89] "storage-provisioner" [0054025f-1375-4a62-a347-c1a77e8b5899] Running
	I1002 03:42:51.459736    2345 system_pods.go:126] duration metric: took 206.698958ms to wait for k8s-apps to be running ...
	I1002 03:42:51.459752    2345 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 03:42:51.459926    2345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 03:42:51.478087    2345 system_svc.go:56] duration metric: took 18.331125ms WaitForService to wait for kubelet.
	I1002 03:42:51.478110    2345 kubeadm.go:581] duration metric: took 12.383131708s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 03:42:51.478132    2345 node_conditions.go:102] verifying NodePressure condition ...
	I1002 03:42:51.645382    2345 request.go:629] Waited for 167.164792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I1002 03:42:51.655719    2345 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1002 03:42:51.655775    2345 node_conditions.go:123] node cpu capacity is 2
	I1002 03:42:51.655805    2345 node_conditions.go:105] duration metric: took 177.664375ms to run NodePressure ...
	I1002 03:42:51.655835    2345 start.go:228] waiting for startup goroutines ...
	I1002 03:42:51.655853    2345 start.go:233] waiting for cluster config update ...
	I1002 03:42:51.655891    2345 start.go:242] writing updated cluster config ...
	I1002 03:42:51.657271    2345 ssh_runner.go:195] Run: rm -f paused
	I1002 03:42:51.723083    2345 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I1002 03:42:51.727302    2345 out.go:177] 
	W1002 03:42:51.729541    2345 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1002 03:42:51.734199    2345 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1002 03:42:51.740197    2345 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-545000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-10-02 10:41:55 UTC, ends at Mon 2023-10-02 10:43:57 UTC. --
	Oct 02 10:43:32 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:32.123666306Z" level=info msg="shim disconnected" id=22b8ddb71f7b7421e918c310bd74bc687c006b8cb23ba6d3748e93d2423c2e85 namespace=moby
	Oct 02 10:43:32 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:32.123693310Z" level=warning msg="cleaning up after shim disconnected" id=22b8ddb71f7b7421e918c310bd74bc687c006b8cb23ba6d3748e93d2423c2e85 namespace=moby
	Oct 02 10:43:32 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:32.123697644Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 02 10:43:46 ingress-addon-legacy-545000 dockerd[1068]: time="2023-10-02T10:43:46.457030925Z" level=info msg="ignoring event" container=7c29e831feac729d49bdbb27002b03edcac2646df896d71a074f6b6bebc5bcf0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:43:46 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:46.457192986Z" level=info msg="shim disconnected" id=7c29e831feac729d49bdbb27002b03edcac2646df896d71a074f6b6bebc5bcf0 namespace=moby
	Oct 02 10:43:46 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:46.457235241Z" level=warning msg="cleaning up after shim disconnected" id=7c29e831feac729d49bdbb27002b03edcac2646df896d71a074f6b6bebc5bcf0 namespace=moby
	Oct 02 10:43:46 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:46.457240992Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 02 10:43:48 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:48.446575312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 10:43:48 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:48.446930937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:43:48 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:48.446968317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 10:43:48 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:48.447017572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 10:43:48 ingress-addon-legacy-545000 dockerd[1068]: time="2023-10-02T10:43:48.480120220Z" level=info msg="ignoring event" container=7a4da49df504b427cac8f989e58ec94707e0af790e78721dec90416947963068 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:43:48 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:48.480288447Z" level=info msg="shim disconnected" id=7a4da49df504b427cac8f989e58ec94707e0af790e78721dec90416947963068 namespace=moby
	Oct 02 10:43:48 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:48.480317618Z" level=warning msg="cleaning up after shim disconnected" id=7a4da49df504b427cac8f989e58ec94707e0af790e78721dec90416947963068 namespace=moby
	Oct 02 10:43:48 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:48.480321785Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 02 10:43:52 ingress-addon-legacy-545000 dockerd[1068]: time="2023-10-02T10:43:52.891914083Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=df72f4a0a4539421872ab8b425128818348bf164d64353caac092aef91edc8ab
	Oct 02 10:43:52 ingress-addon-legacy-545000 dockerd[1068]: time="2023-10-02T10:43:52.898231396Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=df72f4a0a4539421872ab8b425128818348bf164d64353caac092aef91edc8ab
	Oct 02 10:43:52 ingress-addon-legacy-545000 dockerd[1068]: time="2023-10-02T10:43:52.982902062Z" level=info msg="ignoring event" container=df72f4a0a4539421872ab8b425128818348bf164d64353caac092aef91edc8ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:43:52 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:52.983459290Z" level=info msg="shim disconnected" id=df72f4a0a4539421872ab8b425128818348bf164d64353caac092aef91edc8ab namespace=moby
	Oct 02 10:43:52 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:52.983699607Z" level=warning msg="cleaning up after shim disconnected" id=df72f4a0a4539421872ab8b425128818348bf164d64353caac092aef91edc8ab namespace=moby
	Oct 02 10:43:52 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:52.983720526Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 02 10:43:53 ingress-addon-legacy-545000 dockerd[1068]: time="2023-10-02T10:43:53.011432158Z" level=info msg="ignoring event" container=3aca4149ea074beb25986e068d93799fb5850f26a0cc1679d8ce1d01bbe3ef61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 10:43:53 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:53.011875913Z" level=info msg="shim disconnected" id=3aca4149ea074beb25986e068d93799fb5850f26a0cc1679d8ce1d01bbe3ef61 namespace=moby
	Oct 02 10:43:53 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:53.011908125Z" level=warning msg="cleaning up after shim disconnected" id=3aca4149ea074beb25986e068d93799fb5850f26a0cc1679d8ce1d01bbe3ef61 namespace=moby
	Oct 02 10:43:53 ingress-addon-legacy-545000 dockerd[1076]: time="2023-10-02T10:43:53.011913459Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                                      COMMAND                  CREATED              STATUS                          PORTS     NAMES
	7a4da49df504   97e050c3e21e                               "/hello-app"             9 seconds ago        Exited (1) 9 seconds ago                  k8s_hello-world-app_hello-world-app-5f5d8b66bb-cxzlh_default_42b7f72b-e396-4c91-80ae-66b7ccc116bd_2
	c36e5279bd2b   k8s.gcr.io/pause:3.2                       "/pause"                 28 seconds ago       Up 28 seconds                             k8s_POD_hello-world-app-5f5d8b66bb-cxzlh_default_42b7f72b-e396-4c91-80ae-66b7ccc116bd_0
	0f489ece20fc   nginx                                      "/docker-entrypoint.…"   34 seconds ago       Up 34 seconds                             k8s_nginx_nginx_default_658ed7f5-b147-4641-b3ea-ec35b0aa62aa_0
	c60a40138ed4   k8s.gcr.io/pause:3.2                       "/pause"                 37 seconds ago       Up 37 seconds                             k8s_POD_nginx_default_658ed7f5-b147-4641-b3ea-ec35b0aa62aa_0
	7c29e831feac   k8s.gcr.io/pause:3.2                       "/pause"                 50 seconds ago       Exited (0) 11 seconds ago                 k8s_POD_kube-ingress-dns-minikube_kube-system_2a9aaab1-09b9-4d26-ba17-870a8c30581d_0
	df72f4a0a453   registry.k8s.io/ingress-nginx/controller   "/usr/bin/dumb-init …"   52 seconds ago       Exited (137) 4 seconds ago                k8s_controller_ingress-nginx-controller-7fcf777cb7-pxmx6_ingress-nginx_7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59_0
	3aca4149ea07   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) 4 seconds ago                  k8s_POD_ingress-nginx-controller-7fcf777cb7-pxmx6_ingress-nginx_7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59_0
	73df48ea6ca5   jettech/kube-webhook-certgen               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_patch_ingress-nginx-admission-patch-pxkll_ingress-nginx_2e28a0b5-dc8a-4d19-8cbc-38ec43465a88_0
	c9fe9e4ae554   jettech/kube-webhook-certgen               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_create_ingress-nginx-admission-create-twr82_ingress-nginx_100645ad-0f0b-4b90-afec-765656009d9e_0
	a4e2993aeef3   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-patch-pxkll_ingress-nginx_2e28a0b5-dc8a-4d19-8cbc-38ec43465a88_0
	a9523d1b44d8   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-create-twr82_ingress-nginx_100645ad-0f0b-4b90-afec-765656009d9e_0
	1bc7013260e3   gcr.io/k8s-minikube/storage-provisioner    "/storage-provisioner"   About a minute ago   Up About a minute                         k8s_storage-provisioner_storage-provisioner_kube-system_0054025f-1375-4a62-a347-c1a77e8b5899_0
	b8b8f27b0fd4   6e17ba78cf3e                               "/coredns -conf /etc…"   About a minute ago   Up About a minute                         k8s_coredns_coredns-66bff467f8-zr8t9_kube-system_5cc69159-7156-4c05-9e29-691f1f0d27d8_0
	e8d130beea44   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_coredns-66bff467f8-zr8t9_kube-system_5cc69159-7156-4c05-9e29-691f1f0d27d8_0
	8b0a8351fb53   565297bc6f7d                               "/usr/local/bin/kube…"   About a minute ago   Up About a minute                         k8s_kube-proxy_kube-proxy-kv89x_kube-system_a83ce56c-fc49-4b5e-823e-cf2cde34115e_0
	d5f4faa9b4f4   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_storage-provisioner_kube-system_0054025f-1375-4a62-a347-c1a77e8b5899_0
	db5e7065afb7   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-proxy-kv89x_kube-system_a83ce56c-fc49-4b5e-823e-cf2cde34115e_0
	c6eb89eae0f1   095f37015706                               "kube-scheduler --au…"   About a minute ago   Up About a minute                         k8s_kube-scheduler_kube-scheduler-ingress-addon-legacy-545000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	2df5da011446   68a4fac29a86                               "kube-controller-man…"   About a minute ago   Up About a minute                         k8s_kube-controller-manager_kube-controller-manager-ingress-addon-legacy-545000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	0f598fa0ebfe   2694cf044d66                               "kube-apiserver --ad…"   About a minute ago   Up About a minute                         k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-545000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	f5549ae6defa   ab707b0a0ea3                               "etcd --advertise-cl…"   About a minute ago   Up About a minute                         k8s_etcd_etcd-ingress-addon-legacy-545000_kube-system_bc225fd9bc7779fe68cabc4b1e33c44c_0
	d226549dd648   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-scheduler-ingress-addon-legacy-545000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	c3e9047b711e   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-controller-manager-ingress-addon-legacy-545000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	99ccf25b5fef   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_etcd-ingress-addon-legacy-545000_kube-system_bc225fd9bc7779fe68cabc4b1e33c44c_0
	6c166e150662   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-apiserver-ingress-addon-legacy-545000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	time="2023-10-02T10:43:57Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [b8b8f27b0fd4] <==
	* [INFO] 172.17.0.1:22140 - 24341 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000053385s
	[INFO] 172.17.0.1:22140 - 19307 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030339s
	[INFO] 172.17.0.1:22140 - 12444 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027797s
	[INFO] 172.17.0.1:22140 - 43776 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000042758s
	[INFO] 172.17.0.1:21963 - 4781 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015961s
	[INFO] 172.17.0.1:21963 - 10194 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000068221s
	[INFO] 172.17.0.1:21963 - 23909 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00001617s
	[INFO] 172.17.0.1:21963 - 25349 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015544s
	[INFO] 172.17.0.1:21963 - 18178 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000015087s
	[INFO] 172.17.0.1:21963 - 42344 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000022921s
	[INFO] 172.17.0.1:21963 - 30658 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000023671s
	[INFO] 172.17.0.1:14099 - 12094 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000032256s
	[INFO] 172.17.0.1:14099 - 31424 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000020837s
	[INFO] 172.17.0.1:14099 - 58978 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064262s
	[INFO] 172.17.0.1:14099 - 58499 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000020587s
	[INFO] 172.17.0.1:14099 - 18170 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000014586s
	[INFO] 172.17.0.1:14099 - 59188 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000021337s
	[INFO] 172.17.0.1:14099 - 9588 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000109312s
	[INFO] 172.17.0.1:44488 - 36041 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00002338s
	[INFO] 172.17.0.1:44488 - 13990 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000014503s
	[INFO] 172.17.0.1:44488 - 50687 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00001842s
	[INFO] 172.17.0.1:44488 - 52809 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012544s
	[INFO] 172.17.0.1:44488 - 10438 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029381s
	[INFO] 172.17.0.1:44488 - 59398 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012627s
	[INFO] 172.17.0.1:44488 - 60057 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000014086s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-545000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-545000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=ingress-addon-legacy-545000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T03_42_24_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 10:42:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-545000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 10:43:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 10:43:30 +0000   Mon, 02 Oct 2023 10:42:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 10:43:30 +0000   Mon, 02 Oct 2023 10:42:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 10:43:30 +0000   Mon, 02 Oct 2023 10:42:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 10:43:30 +0000   Mon, 02 Oct 2023 10:42:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-545000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 b290fd4e3f4e4debb7750b602f76ca6f
	  System UUID:                b290fd4e3f4e4debb7750b602f76ca6f
	  Boot ID:                    13f52500-e661-426e-ba09-4c7013fb693e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-cxzlh                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 coredns-66bff467f8-zr8t9                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     79s
	  kube-system                 etcd-ingress-addon-legacy-545000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-apiserver-ingress-addon-legacy-545000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-545000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-kv89x                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-ingress-addon-legacy-545000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 87s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  87s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s   kubelet     Node ingress-addon-legacy-545000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s   kubelet     Node ingress-addon-legacy-545000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s   kubelet     Node ingress-addon-legacy-545000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                87s   kubelet     Node ingress-addon-legacy-545000 status is now: NodeReady
	  Normal  Starting                 78s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct 2 10:41] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.669606] EINJ: EINJ table not found.
	[  +0.527234] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043381] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000858] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.132249] systemd-fstab-generator[485]: Ignoring "noauto" for root device
	[  +0.080072] systemd-fstab-generator[496]: Ignoring "noauto" for root device
	[  +0.428145] systemd-fstab-generator[714]: Ignoring "noauto" for root device
	[  +0.189516] systemd-fstab-generator[751]: Ignoring "noauto" for root device
	[  +0.074087] systemd-fstab-generator[762]: Ignoring "noauto" for root device
	[  +0.085130] systemd-fstab-generator[775]: Ignoring "noauto" for root device
	[Oct 2 10:42] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +1.486388] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.219891] systemd-fstab-generator[1536]: Ignoring "noauto" for root device
	[  +8.402777] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.077085] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +6.289933] systemd-fstab-generator[2619]: Ignoring "noauto" for root device
	[ +15.768756] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.608692] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.075672] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Oct 2 10:43] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.438132] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [f5549ae6defa] <==
	* raft2023/10/02 10:42:18 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/10/02 10:42:18 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/10/02 10:42:18 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-10-02 10:42:18.756942 W | auth: simple token is not cryptographically signed
	2023-10-02 10:42:18.757711 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-02 10:42:18.891128 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-02 10:42:18.938025 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/02 10:42:18 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-10-02 10:42:18.938164 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	2023-10-02 10:42:18.938193 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-02 10:42:18.938230 I | embed: listening for peers on 192.168.105.6:2380
	raft2023/10/02 10:42:19 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/10/02 10:42:19 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/10/02 10:42:19 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/10/02 10:42:19 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/10/02 10:42:19 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-10-02 10:42:19.209697 I | etcdserver: published {Name:ingress-addon-legacy-545000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-10-02 10:42:19.209745 I | embed: ready to serve client requests
	2023-10-02 10:42:19.209980 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-02 10:42:19.210189 I | embed: ready to serve client requests
	2023-10-02 10:42:19.213160 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-02 10:42:19.215121 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-02 10:42:19.215416 I | embed: serving client requests on 192.168.105.6:2379
	2023-10-02 10:42:19.215625 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-02 10:43:13.387728 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1107" took too long (106.062141ms) to execute
	
	* 
	* ==> kernel <==
	*  10:43:57 up 2 min,  0 users,  load average: 1.12, 0.54, 0.20
	Linux ingress-addon-legacy-545000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0f598fa0ebfe] <==
	* I1002 10:42:21.074243       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E1002 10:42:21.091012       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I1002 10:42:21.175953       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1002 10:42:21.175993       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1002 10:42:21.176008       1 cache.go:39] Caches are synced for autoregister controller
	I1002 10:42:21.176478       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 10:42:21.176492       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 10:42:22.077018       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1002 10:42:22.077089       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1002 10:42:22.101745       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1002 10:42:22.110494       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1002 10:42:22.110524       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1002 10:42:22.246883       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 10:42:22.257555       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1002 10:42:22.342856       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I1002 10:42:22.343282       1 controller.go:609] quota admission added evaluator for: endpoints
	I1002 10:42:22.344920       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 10:42:23.388161       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1002 10:42:23.927996       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1002 10:42:24.100770       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1002 10:42:30.317462       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 10:42:38.823751       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1002 10:42:39.254031       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1002 10:42:52.046258       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1002 10:43:19.879341       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [2df5da011446] <==
	* I1002 10:42:38.887931       1 range_allocator.go:373] Set node ingress-addon-legacy-545000 PodCIDR to [10.244.0.0/24]
	I1002 10:42:38.888285       1 shared_informer.go:230] Caches are synced for persistent volume 
	E1002 10:42:38.911774       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1002 10:42:39.040917       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"fe73697c-e5cd-4eeb-a5eb-821d669000f8", APIVersion:"apps/v1", ResourceVersion:"334", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1002 10:42:39.061315       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"003823e6-41d4-446c-90c4-dec60f82dbbb", APIVersion:"apps/v1", ResourceVersion:"335", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-cdc7h
	I1002 10:42:39.187325       1 shared_informer.go:230] Caches are synced for endpoint 
	I1002 10:42:39.250216       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1002 10:42:39.260142       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"937c77df-68f6-4f1d-af94-871e07ea7f2d", APIVersion:"apps/v1", ResourceVersion:"211", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-kv89x
	I1002 10:42:39.296045       1 shared_informer.go:230] Caches are synced for stateful set 
	I1002 10:42:39.349091       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1002 10:42:39.361677       1 shared_informer.go:230] Caches are synced for resource quota 
	I1002 10:42:39.389047       1 shared_informer.go:230] Caches are synced for job 
	I1002 10:42:39.393473       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I1002 10:42:39.397984       1 shared_informer.go:230] Caches are synced for resource quota 
	I1002 10:42:39.421471       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1002 10:42:39.421512       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1002 10:42:39.436121       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I1002 10:42:52.044179       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"d6a796cb-eca1-4907-8579-f57d29852816", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1002 10:42:52.051504       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"24ebf51b-99ce-4013-8fd8-ac93d5c4aeb4", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-pxmx6
	I1002 10:42:52.054054       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"d55f5e72-98e8-4a71-bc02-b37414e710ea", APIVersion:"batch/v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-twr82
	I1002 10:42:52.079605       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f2c03010-32a8-4e1c-8159-cc600bd48388", APIVersion:"batch/v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-pxkll
	I1002 10:42:55.651133       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f2c03010-32a8-4e1c-8159-cc600bd48388", APIVersion:"batch/v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1002 10:42:55.677048       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"d55f5e72-98e8-4a71-bc02-b37414e710ea", APIVersion:"batch/v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1002 10:43:29.156904       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"8544ea05-ed5a-4723-af25-8ed4521f974e", APIVersion:"apps/v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1002 10:43:29.163211       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"5883564b-ac05-49f8-ba3f-5a6607211419", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-cxzlh
	
	* 
	* ==> kube-proxy [8b0a8351fb53] <==
	* W1002 10:42:39.777909       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1002 10:42:39.782187       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I1002 10:42:39.782207       1 server_others.go:186] Using iptables Proxier.
	I1002 10:42:39.782382       1 server.go:583] Version: v1.18.20
	I1002 10:42:39.783169       1 config.go:133] Starting endpoints config controller
	I1002 10:42:39.783182       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1002 10:42:39.783343       1 config.go:315] Starting service config controller
	I1002 10:42:39.783348       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1002 10:42:39.883345       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1002 10:42:39.883655       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [c6eb89eae0f1] <==
	* W1002 10:42:21.096622       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 10:42:21.127289       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1002 10:42:21.127326       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1002 10:42:21.129141       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 10:42:21.129412       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1002 10:42:21.130088       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1002 10:42:21.130301       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1002 10:42:21.130331       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1002 10:42:21.131373       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 10:42:21.132016       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 10:42:21.132179       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 10:42:21.132201       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 10:42:21.132231       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 10:42:21.132251       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 10:42:21.132254       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 10:42:21.132515       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 10:42:21.132577       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 10:42:21.132624       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 10:42:21.132774       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 10:42:21.991746       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 10:42:21.995851       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 10:42:22.007575       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 10:42:22.041658       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 10:42:22.155840       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1002 10:42:22.430854       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 10:41:55 UTC, ends at Mon 2023-10-02 10:43:58 UTC. --
	Oct 02 10:43:34 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:34.093158    2626 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 22b8ddb71f7b7421e918c310bd74bc687c006b8cb23ba6d3748e93d2423c2e85
	Oct 02 10:43:34 ingress-addon-legacy-545000 kubelet[2626]: E1002 10:43:34.093466    2626 pod_workers.go:191] Error syncing pod 42b7f72b-e396-4c91-80ae-66b7ccc116bd ("hello-world-app-5f5d8b66bb-cxzlh_default(42b7f72b-e396-4c91-80ae-66b7ccc116bd)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-cxzlh_default(42b7f72b-e396-4c91-80ae-66b7ccc116bd)"
	Oct 02 10:43:41 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:41.384691    2626 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d617fa7604cf2128771604bbd13f227ed7440efe09a9034e8d628500b85cdd5f
	Oct 02 10:43:41 ingress-addon-legacy-545000 kubelet[2626]: E1002 10:43:41.387015    2626 pod_workers.go:191] Error syncing pod 2a9aaab1-09b9-4d26-ba17-870a8c30581d ("kube-ingress-dns-minikube_kube-system(2a9aaab1-09b9-4d26-ba17-870a8c30581d)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(2a9aaab1-09b9-4d26-ba17-870a8c30581d)"
	Oct 02 10:43:44 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:44.612235    2626 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-xtslw" (UniqueName: "kubernetes.io/secret/2a9aaab1-09b9-4d26-ba17-870a8c30581d-minikube-ingress-dns-token-xtslw") pod "2a9aaab1-09b9-4d26-ba17-870a8c30581d" (UID: "2a9aaab1-09b9-4d26-ba17-870a8c30581d")
	Oct 02 10:43:44 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:44.615086    2626 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a9aaab1-09b9-4d26-ba17-870a8c30581d-minikube-ingress-dns-token-xtslw" (OuterVolumeSpecName: "minikube-ingress-dns-token-xtslw") pod "2a9aaab1-09b9-4d26-ba17-870a8c30581d" (UID: "2a9aaab1-09b9-4d26-ba17-870a8c30581d"). InnerVolumeSpecName "minikube-ingress-dns-token-xtslw". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 10:43:44 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:44.713861    2626 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-xtslw" (UniqueName: "kubernetes.io/secret/2a9aaab1-09b9-4d26-ba17-870a8c30581d-minikube-ingress-dns-token-xtslw") on node "ingress-addon-legacy-545000" DevicePath ""
	Oct 02 10:43:47 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:47.335928    2626 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d617fa7604cf2128771604bbd13f227ed7440efe09a9034e8d628500b85cdd5f
	Oct 02 10:43:48 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:48.383482    2626 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 22b8ddb71f7b7421e918c310bd74bc687c006b8cb23ba6d3748e93d2423c2e85
	Oct 02 10:43:48 ingress-addon-legacy-545000 kubelet[2626]: W1002 10:43:48.493333    2626 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod42b7f72b-e396-4c91-80ae-66b7ccc116bd/7a4da49df504b427cac8f989e58ec94707e0af790e78721dec90416947963068": none of the resources are being tracked.
	Oct 02 10:43:49 ingress-addon-legacy-545000 kubelet[2626]: W1002 10:43:49.386842    2626 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-cxzlh through plugin: invalid network status for
	Oct 02 10:43:49 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:49.393343    2626 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 22b8ddb71f7b7421e918c310bd74bc687c006b8cb23ba6d3748e93d2423c2e85
	Oct 02 10:43:49 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:49.393966    2626 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7a4da49df504b427cac8f989e58ec94707e0af790e78721dec90416947963068
	Oct 02 10:43:49 ingress-addon-legacy-545000 kubelet[2626]: E1002 10:43:49.395080    2626 pod_workers.go:191] Error syncing pod 42b7f72b-e396-4c91-80ae-66b7ccc116bd ("hello-world-app-5f5d8b66bb-cxzlh_default(42b7f72b-e396-4c91-80ae-66b7ccc116bd)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-cxzlh_default(42b7f72b-e396-4c91-80ae-66b7ccc116bd)"
	Oct 02 10:43:50 ingress-addon-legacy-545000 kubelet[2626]: W1002 10:43:50.423571    2626 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-cxzlh through plugin: invalid network status for
	Oct 02 10:43:50 ingress-addon-legacy-545000 kubelet[2626]: E1002 10:43:50.880877    2626 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-pxmx6.178a4469ce17f31e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-pxmx6", UID:"7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59", APIVersion:"v1", ResourceVersion:"442", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-545000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13ec599b45a771e, ext:86979400079, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13ec599b45a771e, ext:86979400079, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-pxmx6.178a4469ce17f31e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 02 10:43:50 ingress-addon-legacy-545000 kubelet[2626]: E1002 10:43:50.892245    2626 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-pxmx6.178a4469ce17f31e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-pxmx6", UID:"7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59", APIVersion:"v1", ResourceVersion:"442", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-545000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13ec599b45a771e, ext:86979400079, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13ec599b4cd7840, ext:86986937009, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-pxmx6.178a4469ce17f31e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 02 10:43:53 ingress-addon-legacy-545000 kubelet[2626]: W1002 10:43:53.466192    2626 pod_container_deletor.go:77] Container "3aca4149ea074beb25986e068d93799fb5850f26a0cc1679d8ce1d01bbe3ef61" not found in pod's containers
	Oct 02 10:43:55 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:55.047729    2626 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59-webhook-cert") pod "7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59" (UID: "7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59")
	Oct 02 10:43:55 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:55.048782    2626 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-kf6xd" (UniqueName: "kubernetes.io/secret/7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59-ingress-nginx-token-kf6xd") pod "7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59" (UID: "7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59")
	Oct 02 10:43:55 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:55.058598    2626 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59" (UID: "7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 10:43:55 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:55.064265    2626 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59-ingress-nginx-token-kf6xd" (OuterVolumeSpecName: "ingress-nginx-token-kf6xd") pod "7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59" (UID: "7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59"). InnerVolumeSpecName "ingress-nginx-token-kf6xd". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 10:43:55 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:55.152419    2626 reconciler.go:319] Volume detached for volume "ingress-nginx-token-kf6xd" (UniqueName: "kubernetes.io/secret/7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59-ingress-nginx-token-kf6xd") on node "ingress-addon-legacy-545000" DevicePath ""
	Oct 02 10:43:55 ingress-addon-legacy-545000 kubelet[2626]: I1002 10:43:55.152522    2626 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59-webhook-cert") on node "ingress-addon-legacy-545000" DevicePath ""
	Oct 02 10:43:56 ingress-addon-legacy-545000 kubelet[2626]: W1002 10:43:56.401861    2626 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/7dcd81fb-1fd1-46e5-8fcc-350e5a3c3c59/volumes" does not exist
	
	* 
	* ==> storage-provisioner [1bc7013260e3] <==
	* I1002 10:42:41.141143       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 10:42:41.145005       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 10:42:41.145025       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 10:42:41.147852       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 10:42:41.148054       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-545000_b30fcf3a-e0d8-48d5-8a9e-e5571f70557c!
	I1002 10:42:41.149588       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9d3eeb72-8936-4bd4-ab74-f6f21a4ca9be", APIVersion:"v1", ResourceVersion:"383", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-545000_b30fcf3a-e0d8-48d5-8a9e-e5571f70557c became leader
	I1002 10:42:41.250677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-545000_b30fcf3a-e0d8-48d5-8a9e-e5571f70557c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-545000 -n ingress-addon-legacy-545000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-545000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (50.84s)
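For context on the `back-off 10s` and `back-off 20s restarting failed container` messages in the kubelet log above: the kubelet doubles the CrashLoopBackOff delay on each failed restart up to a cap. A minimal sketch of the resulting delay sequence, assuming the upstream kubelet defaults of a 10s base and a 300s (5m) cap:

```shell
# Print the assumed CrashLoopBackOff delay sequence: start at 10s,
# double after each failed restart, and clamp at 300s.
d=10
for i in 1 2 3 4 5 6 7; do
  if [ "$d" -gt 300 ]; then echo 300; else echo "$d"; fi
  d=$((d * 2))
done
# prints: 10 20 40 80 160 300 300 (one value per line)
```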

TestMountStart/serial/StartWithMountFirst (10.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-759000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-759000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.020280209s)

-- stdout --
	* [mount-start-1-759000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-759000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-759000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-759000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-759000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-759000 -n mount-start-1-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-759000 -n mount-start-1-759000: exit status 7 (67.106959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.09s)
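The repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` errors above mean the socket_vmnet helper was not serving its unix socket when the qemu2 driver started. A minimal pre-flight check (the socket path is taken from the error message; how socket_vmnet is launched, e.g. via launchd, varies by install):

```shell
# Pre-flight sketch: confirm the socket_vmnet unix socket exists before
# running `minikube start --driver=qemu2`. If it is missing, start the
# socket_vmnet service first and retry.
SOCKET=/var/run/socket_vmnet
if [ -S "$SOCKET" ]; then
  echo "socket_vmnet socket present at $SOCKET"
else
  echo "socket_vmnet socket missing at $SOCKET"
fi
```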

TestMultiNode/serial/FreshStart2Nodes (9.72s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-335000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-335000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.651017458s)

-- stdout --
	* [multinode-335000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-335000 in cluster multinode-335000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-335000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:46:46.570807    2718 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:46:46.570950    2718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:46:46.570954    2718 out.go:309] Setting ErrFile to fd 2...
	I1002 03:46:46.570956    2718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:46:46.571103    2718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:46:46.572181    2718 out.go:303] Setting JSON to false
	I1002 03:46:46.588155    2718 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":980,"bootTime":1696242626,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:46:46.588246    2718 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:46:46.593606    2718 out.go:177] * [multinode-335000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:46:46.600672    2718 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:46:46.603573    2718 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:46:46.600717    2718 notify.go:220] Checking for updates...
	I1002 03:46:46.606650    2718 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:46:46.609642    2718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:46:46.612616    2718 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:46:46.623642    2718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:46:46.627688    2718 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:46:46.631614    2718 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:46:46.639608    2718 start.go:298] selected driver: qemu2
	I1002 03:46:46.639613    2718 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:46:46.639618    2718 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:46:46.642001    2718 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:46:46.644617    2718 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:46:46.647707    2718 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:46:46.647729    2718 cni.go:84] Creating CNI manager for ""
	I1002 03:46:46.647733    2718 cni.go:136] 0 nodes found, recommending kindnet
	I1002 03:46:46.647738    2718 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 03:46:46.647743    2718 start_flags.go:321] config:
	{Name:multinode-335000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-335000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0 AutoPauseInterval:1m0s}
	I1002 03:46:46.652187    2718 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:46:46.659690    2718 out.go:177] * Starting control plane node multinode-335000 in cluster multinode-335000
	I1002 03:46:46.662606    2718 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:46:46.662621    2718 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:46:46.662637    2718 cache.go:57] Caching tarball of preloaded images
	I1002 03:46:46.662687    2718 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:46:46.662692    2718 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:46:46.662925    2718 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/multinode-335000/config.json ...
	I1002 03:46:46.662936    2718 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/multinode-335000/config.json: {Name:mkbc5809198da7b5903d8ae4e151991e671342e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:46:46.663137    2718 start.go:365] acquiring machines lock for multinode-335000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:46:46.663167    2718 start.go:369] acquired machines lock for "multinode-335000" in 24.167µs
	I1002 03:46:46.663178    2718 start.go:93] Provisioning new machine with config: &{Name:multinode-335000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-335000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:46:46.663206    2718 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:46:46.670594    2718 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:46:46.686755    2718 start.go:159] libmachine.API.Create for "multinode-335000" (driver="qemu2")
	I1002 03:46:46.686792    2718 client.go:168] LocalClient.Create starting
	I1002 03:46:46.686851    2718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:46:46.686887    2718 main.go:141] libmachine: Decoding PEM data...
	I1002 03:46:46.686898    2718 main.go:141] libmachine: Parsing certificate...
	I1002 03:46:46.686931    2718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:46:46.686949    2718 main.go:141] libmachine: Decoding PEM data...
	I1002 03:46:46.686954    2718 main.go:141] libmachine: Parsing certificate...
	I1002 03:46:46.687285    2718 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:46:46.794887    2718 main.go:141] libmachine: Creating SSH key...
	I1002 03:46:46.855702    2718 main.go:141] libmachine: Creating Disk image...
	I1002 03:46:46.855708    2718 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:46:46.855871    2718 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2
	I1002 03:46:46.864595    2718 main.go:141] libmachine: STDOUT: 
	I1002 03:46:46.864611    2718 main.go:141] libmachine: STDERR: 
	I1002 03:46:46.864671    2718 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2 +20000M
	I1002 03:46:46.872089    2718 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:46:46.872103    2718 main.go:141] libmachine: STDERR: 
	I1002 03:46:46.872118    2718 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2
	I1002 03:46:46.872133    2718 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:46:46.872165    2718 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:b0:e2:2a:fb:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2
	I1002 03:46:46.873742    2718 main.go:141] libmachine: STDOUT: 
	I1002 03:46:46.873757    2718 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:46:46.873776    2718 client.go:171] LocalClient.Create took 186.981084ms
	I1002 03:46:48.875926    2718 start.go:128] duration metric: createHost completed in 2.212739334s
	I1002 03:46:48.876006    2718 start.go:83] releasing machines lock for "multinode-335000", held for 2.212873s
	W1002 03:46:48.876059    2718 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:46:48.888298    2718 out.go:177] * Deleting "multinode-335000" in qemu2 ...
	W1002 03:46:48.909327    2718 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:46:48.909373    2718 start.go:703] Will try again in 5 seconds ...
	I1002 03:46:53.911170    2718 start.go:365] acquiring machines lock for multinode-335000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:46:53.911537    2718 start.go:369] acquired machines lock for "multinode-335000" in 262.791µs
	I1002 03:46:53.911670    2718 start.go:93] Provisioning new machine with config: &{Name:multinode-335000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-335000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:46:53.912008    2718 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:46:53.922682    2718 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:46:53.970213    2718 start.go:159] libmachine.API.Create for "multinode-335000" (driver="qemu2")
	I1002 03:46:53.970252    2718 client.go:168] LocalClient.Create starting
	I1002 03:46:53.970696    2718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:46:53.970807    2718 main.go:141] libmachine: Decoding PEM data...
	I1002 03:46:53.970832    2718 main.go:141] libmachine: Parsing certificate...
	I1002 03:46:53.970910    2718 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:46:53.970950    2718 main.go:141] libmachine: Decoding PEM data...
	I1002 03:46:53.970962    2718 main.go:141] libmachine: Parsing certificate...
	I1002 03:46:53.972084    2718 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:46:54.089305    2718 main.go:141] libmachine: Creating SSH key...
	I1002 03:46:54.136656    2718 main.go:141] libmachine: Creating Disk image...
	I1002 03:46:54.136662    2718 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:46:54.136819    2718 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2
	I1002 03:46:54.145545    2718 main.go:141] libmachine: STDOUT: 
	I1002 03:46:54.145559    2718 main.go:141] libmachine: STDERR: 
	I1002 03:46:54.145609    2718 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2 +20000M
	I1002 03:46:54.153020    2718 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:46:54.153039    2718 main.go:141] libmachine: STDERR: 
	I1002 03:46:54.153053    2718 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2
	I1002 03:46:54.153058    2718 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:46:54.153106    2718 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c3:9a:bf:40:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2
	I1002 03:46:54.154692    2718 main.go:141] libmachine: STDOUT: 
	I1002 03:46:54.154706    2718 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:46:54.154721    2718 client.go:171] LocalClient.Create took 184.468792ms
	I1002 03:46:56.156890    2718 start.go:128] duration metric: createHost completed in 2.244887042s
	I1002 03:46:56.156984    2718 start.go:83] releasing machines lock for "multinode-335000", held for 2.245470416s
	W1002 03:46:56.157468    2718 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-335000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-335000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:46:56.166270    2718 out.go:177] 
	W1002 03:46:56.171178    2718 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:46:56.171206    2718 out.go:239] * 
	* 
	W1002 03:46:56.173825    2718 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:46:56.182260    2718 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-335000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000: exit status 7 (67.793458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.72s)
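The root cause repeated throughout this failure is QEMU being refused at `/var/run/socket_vmnet` ("Connection refused"), which typically means the socket_vmnet daemon is not running or not listening at the path the qemu2 driver dials. A minimal diagnostic sketch for the CI host (the socket path is taken from the log above; the helper name is made up for illustration, and any restart command would depend on how socket_vmnet was installed):

```shell
#!/bin/sh
# check_vmnet_socket reports whether the UNIX socket that minikube's qemu2
# driver connects to actually exists as a socket at the expected path.
check_vmnet_socket() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    echo "present: $sock"
  else
    # This matches the failure mode in the log: the client has nothing
    # to connect to, so every VM create attempt fails immediately.
    echo "missing: $sock"
  fi
}

check_vmnet_socket
```

If the socket is missing, restarting the socket_vmnet service on the agent (e.g. via its launchd job, whose label varies by install) before re-running the suite would be the first thing to try; all 9–10s failures in this report share this signature.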

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (93.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (126.629625ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-335000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- rollout status deployment/busybox: exit status 1 (55.091291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-335000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (53.413417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-335000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.3315ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-335000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.960791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-335000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.933958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-335000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.483833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-335000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.811125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-335000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.250541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-335000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.521084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-335000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.089ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-335000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E1002 03:47:47.255317    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.049375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-335000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E1002 03:48:07.362906    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
E1002 03:48:07.369319    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
E1002 03:48:07.381468    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
E1002 03:48:07.403700    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
E1002 03:48:07.445867    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
E1002 03:48:07.528034    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
E1002 03:48:07.690173    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
E1002 03:48:08.012386    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
E1002 03:48:08.654773    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
E1002 03:48:09.937113    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
E1002 03:48:12.499525    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
E1002 03:48:17.621902    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
E1002 03:48:27.864121    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.459666ms)

** stderr ** 
	error: no server found for cluster "multinode-335000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.935042ms)

** stderr ** 
	error: no server found for cluster "multinode-335000"

** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- exec  -- nslookup kubernetes.io: exit status 1 (53.101ms)

** stderr ** 
	error: no server found for cluster "multinode-335000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- exec  -- nslookup kubernetes.default: exit status 1 (53.17725ms)

** stderr ** 
	error: no server found for cluster "multinode-335000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (53.780542ms)

** stderr ** 
	error: no server found for cluster "multinode-335000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000: exit status 7 (27.904875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (93.18s)

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-335000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (52.789583ms)

** stderr ** 
	error: no server found for cluster "multinode-335000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000: exit status 7 (27.632083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-335000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-335000 -v 3 --alsologtostderr: exit status 89 (40.997125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-335000"

-- /stdout --
** stderr ** 
	I1002 03:48:29.554686    2823 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:48:29.554913    2823 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:29.554916    2823 out.go:309] Setting ErrFile to fd 2...
	I1002 03:48:29.554919    2823 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:29.555062    2823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:48:29.555321    2823 mustload.go:65] Loading cluster: multinode-335000
	I1002 03:48:29.555511    2823 config.go:182] Loaded profile config "multinode-335000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:48:29.559757    2823 out.go:177] * The control plane node must be running for this command
	I1002 03:48:29.562870    2823 out.go:177]   To start a cluster, run: "minikube start -p multinode-335000"

** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-335000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000: exit status 7 (27.750917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-335000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-335000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-335000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.2\",\"ClusterName\":\"multinode-335000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000: exit status 7 (27.744209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-335000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-335000 status --output json --alsologtostderr: exit status 7 (27.815292ms)

-- stdout --
	{"Name":"multinode-335000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1002 03:48:29.720402    2833 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:48:29.720579    2833 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:29.720582    2833 out.go:309] Setting ErrFile to fd 2...
	I1002 03:48:29.720585    2833 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:29.720723    2833 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:48:29.720839    2833 out.go:303] Setting JSON to true
	I1002 03:48:29.720851    2833 mustload.go:65] Loading cluster: multinode-335000
	I1002 03:48:29.720901    2833 notify.go:220] Checking for updates...
	I1002 03:48:29.721055    2833 config.go:182] Loaded profile config "multinode-335000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:48:29.721060    2833 status.go:255] checking status of multinode-335000 ...
	I1002 03:48:29.721259    2833 status.go:330] multinode-335000 host status = "Stopped" (err=<nil>)
	I1002 03:48:29.721263    2833 status.go:343] host is not running, skipping remaining checks
	I1002 03:48:29.721265    2833 status.go:257] multinode-335000 status: &{Name:multinode-335000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-335000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000: exit status 7 (27.687917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-335000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-335000 node stop m03: exit status 85 (47.626417ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-335000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-335000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-335000 status: exit status 7 (28.1305ms)

-- stdout --
	multinode-335000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-335000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-335000 status --alsologtostderr: exit status 7 (27.774666ms)

-- stdout --
	multinode-335000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1002 03:48:29.852392    2841 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:48:29.852578    2841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:29.852581    2841 out.go:309] Setting ErrFile to fd 2...
	I1002 03:48:29.852584    2841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:29.852710    2841 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:48:29.852832    2841 out.go:303] Setting JSON to false
	I1002 03:48:29.852842    2841 mustload.go:65] Loading cluster: multinode-335000
	I1002 03:48:29.852904    2841 notify.go:220] Checking for updates...
	I1002 03:48:29.853032    2841 config.go:182] Loaded profile config "multinode-335000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:48:29.853037    2841 status.go:255] checking status of multinode-335000 ...
	I1002 03:48:29.853256    2841 status.go:330] multinode-335000 host status = "Stopped" (err=<nil>)
	I1002 03:48:29.853260    2841 status.go:343] host is not running, skipping remaining checks
	I1002 03:48:29.853262    2841 status.go:257] multinode-335000 status: &{Name:multinode-335000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-335000 status --alsologtostderr": multinode-335000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000: exit status 7 (27.837333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

TestMultiNode/serial/StartAfterStop (0.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-335000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-335000 node start m03 --alsologtostderr: exit status 85 (45.780666ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 03:48:29.908679    2845 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:48:29.908900    2845 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:29.908903    2845 out.go:309] Setting ErrFile to fd 2...
	I1002 03:48:29.908906    2845 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:29.909040    2845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:48:29.909284    2845 mustload.go:65] Loading cluster: multinode-335000
	I1002 03:48:29.909472    2845 config.go:182] Loaded profile config "multinode-335000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:48:29.914205    2845 out.go:177] 
	W1002 03:48:29.917228    2845 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1002 03:48:29.917234    2845 out.go:239] * 
	* 
	W1002 03:48:29.918727    2845 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:48:29.922206    2845 out.go:177] 

** /stderr **
multinode_test.go:256: I1002 03:48:29.908679    2845 out.go:296] Setting OutFile to fd 1 ...
I1002 03:48:29.908900    2845 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:48:29.908903    2845 out.go:309] Setting ErrFile to fd 2...
I1002 03:48:29.908906    2845 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:48:29.909040    2845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
I1002 03:48:29.909284    2845 mustload.go:65] Loading cluster: multinode-335000
I1002 03:48:29.909472    2845 config.go:182] Loaded profile config "multinode-335000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:48:29.914205    2845 out.go:177] 
W1002 03:48:29.917228    2845 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1002 03:48:29.917234    2845 out.go:239] * 
* 
W1002 03:48:29.918727    2845 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1002 03:48:29.922206    2845 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-335000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-335000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-335000 status: exit status 7 (27.769042ms)

-- stdout --
	multinode-335000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-335000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000: exit status 7 (27.775625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-335000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-335000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-335000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-335000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.218700208s)

-- stdout --
	* [multinode-335000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-335000 in cluster multinode-335000
	* Restarting existing qemu2 VM for "multinode-335000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-335000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:48:30.095029    2855 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:48:30.095162    2855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:30.095165    2855 out.go:309] Setting ErrFile to fd 2...
	I1002 03:48:30.095168    2855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:30.095291    2855 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:48:30.096211    2855 out.go:303] Setting JSON to false
	I1002 03:48:30.112166    2855 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1084,"bootTime":1696242626,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:48:30.112244    2855 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:48:30.116099    2855 out.go:177] * [multinode-335000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:48:30.129226    2855 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:48:30.125321    2855 notify.go:220] Checking for updates...
	I1002 03:48:30.138253    2855 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:48:30.152202    2855 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:48:30.162175    2855 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:48:30.173139    2855 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:48:30.180206    2855 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:48:30.183509    2855 config.go:182] Loaded profile config "multinode-335000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:48:30.183555    2855 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:48:30.188189    2855 out.go:177] * Using the qemu2 driver based on existing profile
	I1002 03:48:30.195159    2855 start.go:298] selected driver: qemu2
	I1002 03:48:30.195166    2855 start.go:902] validating driver "qemu2" against &{Name:multinode-335000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:multinode-335000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:48:30.195218    2855 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:48:30.197697    2855 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:48:30.197726    2855 cni.go:84] Creating CNI manager for ""
	I1002 03:48:30.197730    2855 cni.go:136] 1 nodes found, recommending kindnet
	I1002 03:48:30.197737    2855 start_flags.go:321] config:
	{Name:multinode-335000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-335000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:48:30.202774    2855 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:48:30.210221    2855 out.go:177] * Starting control plane node multinode-335000 in cluster multinode-335000
	I1002 03:48:30.214237    2855 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:48:30.214254    2855 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:48:30.214273    2855 cache.go:57] Caching tarball of preloaded images
	I1002 03:48:30.214332    2855 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:48:30.214337    2855 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:48:30.214404    2855 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/multinode-335000/config.json ...
	I1002 03:48:30.214835    2855 start.go:365] acquiring machines lock for multinode-335000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:48:30.214868    2855 start.go:369] acquired machines lock for "multinode-335000" in 26.958µs
	I1002 03:48:30.214877    2855 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:48:30.214881    2855 fix.go:54] fixHost starting: 
	I1002 03:48:30.215006    2855 fix.go:102] recreateIfNeeded on multinode-335000: state=Stopped err=<nil>
	W1002 03:48:30.215014    2855 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:48:30.223205    2855 out.go:177] * Restarting existing qemu2 VM for "multinode-335000" ...
	I1002 03:48:30.227191    2855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c3:9a:bf:40:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2
	I1002 03:48:30.229373    2855 main.go:141] libmachine: STDOUT: 
	I1002 03:48:30.229394    2855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:48:30.229425    2855 fix.go:56] fixHost completed within 14.544708ms
	I1002 03:48:30.229431    2855 start.go:83] releasing machines lock for "multinode-335000", held for 14.5585ms
	W1002 03:48:30.229437    2855 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:48:30.229505    2855 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:48:30.229510    2855 start.go:703] Will try again in 5 seconds ...
	I1002 03:48:35.231585    2855 start.go:365] acquiring machines lock for multinode-335000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:48:35.231979    2855 start.go:369] acquired machines lock for "multinode-335000" in 252.417µs
	I1002 03:48:35.232123    2855 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:48:35.232181    2855 fix.go:54] fixHost starting: 
	I1002 03:48:35.232929    2855 fix.go:102] recreateIfNeeded on multinode-335000: state=Stopped err=<nil>
	W1002 03:48:35.232955    2855 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:48:35.237542    2855 out.go:177] * Restarting existing qemu2 VM for "multinode-335000" ...
	I1002 03:48:35.245607    2855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c3:9a:bf:40:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2
	I1002 03:48:35.254847    2855 main.go:141] libmachine: STDOUT: 
	I1002 03:48:35.254901    2855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:48:35.254992    2855 fix.go:56] fixHost completed within 22.846042ms
	I1002 03:48:35.255010    2855 start.go:83] releasing machines lock for "multinode-335000", held for 23.0105ms
	W1002 03:48:35.255193    2855 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-335000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-335000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:48:35.261434    2855 out.go:177] 
	W1002 03:48:35.265510    2855 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:48:35.265539    2855 out.go:239] * 
	* 
	W1002 03:48:35.268096    2855 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:48:35.275390    2855 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-335000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-335000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000: exit status 7 (31.515292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.40s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-335000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-335000 node delete m03: exit status 89 (37.752959ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-335000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-335000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-335000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-335000 status --alsologtostderr: exit status 7 (27.408417ms)

-- stdout --
	multinode-335000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1002 03:48:35.450411    2874 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:48:35.450603    2874 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:35.450606    2874 out.go:309] Setting ErrFile to fd 2...
	I1002 03:48:35.450609    2874 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:35.450747    2874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:48:35.450857    2874 out.go:303] Setting JSON to false
	I1002 03:48:35.450869    2874 mustload.go:65] Loading cluster: multinode-335000
	I1002 03:48:35.450934    2874 notify.go:220] Checking for updates...
	I1002 03:48:35.451089    2874 config.go:182] Loaded profile config "multinode-335000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:48:35.451093    2874 status.go:255] checking status of multinode-335000 ...
	I1002 03:48:35.451311    2874 status.go:330] multinode-335000 host status = "Stopped" (err=<nil>)
	I1002 03:48:35.451314    2874 status.go:343] host is not running, skipping remaining checks
	I1002 03:48:35.451316    2874 status.go:257] multinode-335000 status: &{Name:multinode-335000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-335000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000: exit status 7 (27.753959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.09s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-335000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-335000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-335000 status: exit status 7 (28.182208ms)

-- stdout --
	multinode-335000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-335000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-335000 status --alsologtostderr: exit status 7 (27.416042ms)

-- stdout --
	multinode-335000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1002 03:48:35.593755    2882 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:48:35.593940    2882 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:35.593943    2882 out.go:309] Setting ErrFile to fd 2...
	I1002 03:48:35.593945    2882 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:35.594086    2882 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:48:35.594195    2882 out.go:303] Setting JSON to false
	I1002 03:48:35.594207    2882 mustload.go:65] Loading cluster: multinode-335000
	I1002 03:48:35.594256    2882 notify.go:220] Checking for updates...
	I1002 03:48:35.594406    2882 config.go:182] Loaded profile config "multinode-335000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:48:35.594411    2882 status.go:255] checking status of multinode-335000 ...
	I1002 03:48:35.594642    2882 status.go:330] multinode-335000 host status = "Stopped" (err=<nil>)
	I1002 03:48:35.594645    2882 status.go:343] host is not running, skipping remaining checks
	I1002 03:48:35.594647    2882 status.go:257] multinode-335000 status: &{Name:multinode-335000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-335000 status --alsologtostderr": multinode-335000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-335000 status --alsologtostderr": multinode-335000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000: exit status 7 (27.486625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.14s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-335000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-335000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.175863292s)

-- stdout --
	* [multinode-335000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-335000 in cluster multinode-335000
	* Restarting existing qemu2 VM for "multinode-335000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-335000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:48:35.648533    2886 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:48:35.648687    2886 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:35.648690    2886 out.go:309] Setting ErrFile to fd 2...
	I1002 03:48:35.648701    2886 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:48:35.648839    2886 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:48:35.649798    2886 out.go:303] Setting JSON to false
	I1002 03:48:35.665669    2886 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1089,"bootTime":1696242626,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:48:35.665758    2886 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:48:35.670803    2886 out.go:177] * [multinode-335000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:48:35.677818    2886 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:48:35.677889    2886 notify.go:220] Checking for updates...
	I1002 03:48:35.681816    2886 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:48:35.684888    2886 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:48:35.687726    2886 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:48:35.690767    2886 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:48:35.693805    2886 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:48:35.697160    2886 config.go:182] Loaded profile config "multinode-335000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:48:35.697415    2886 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:48:35.701722    2886 out.go:177] * Using the qemu2 driver based on existing profile
	I1002 03:48:35.708734    2886 start.go:298] selected driver: qemu2
	I1002 03:48:35.708741    2886 start.go:902] validating driver "qemu2" against &{Name:multinode-335000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-335000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:48:35.708801    2886 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:48:35.711266    2886 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:48:35.711290    2886 cni.go:84] Creating CNI manager for ""
	I1002 03:48:35.711295    2886 cni.go:136] 1 nodes found, recommending kindnet
	I1002 03:48:35.711302    2886 start_flags.go:321] config:
	{Name:multinode-335000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-335000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:48:35.715552    2886 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:48:35.722784    2886 out.go:177] * Starting control plane node multinode-335000 in cluster multinode-335000
	I1002 03:48:35.726737    2886 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:48:35.726750    2886 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:48:35.726765    2886 cache.go:57] Caching tarball of preloaded images
	I1002 03:48:35.726810    2886 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:48:35.726815    2886 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:48:35.726882    2886 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/multinode-335000/config.json ...
	I1002 03:48:35.727240    2886 start.go:365] acquiring machines lock for multinode-335000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:48:35.727267    2886 start.go:369] acquired machines lock for "multinode-335000" in 21.25µs
	I1002 03:48:35.727275    2886 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:48:35.727279    2886 fix.go:54] fixHost starting: 
	I1002 03:48:35.727384    2886 fix.go:102] recreateIfNeeded on multinode-335000: state=Stopped err=<nil>
	W1002 03:48:35.727391    2886 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:48:35.735743    2886 out.go:177] * Restarting existing qemu2 VM for "multinode-335000" ...
	I1002 03:48:35.739778    2886 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c3:9a:bf:40:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2
	I1002 03:48:35.741673    2886 main.go:141] libmachine: STDOUT: 
	I1002 03:48:35.741691    2886 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:48:35.741718    2886 fix.go:56] fixHost completed within 14.439ms
	I1002 03:48:35.741723    2886 start.go:83] releasing machines lock for "multinode-335000", held for 14.452334ms
	W1002 03:48:35.741728    2886 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:48:35.741768    2886 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:48:35.741772    2886 start.go:703] Will try again in 5 seconds ...
	I1002 03:48:40.741975    2886 start.go:365] acquiring machines lock for multinode-335000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:48:40.742270    2886 start.go:369] acquired machines lock for "multinode-335000" in 223.25µs
	I1002 03:48:40.742387    2886 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:48:40.742406    2886 fix.go:54] fixHost starting: 
	I1002 03:48:40.743125    2886 fix.go:102] recreateIfNeeded on multinode-335000: state=Stopped err=<nil>
	W1002 03:48:40.743151    2886 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:48:40.752441    2886 out.go:177] * Restarting existing qemu2 VM for "multinode-335000" ...
	I1002 03:48:40.756749    2886 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c3:9a:bf:40:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/multinode-335000/disk.qcow2
	I1002 03:48:40.765984    2886 main.go:141] libmachine: STDOUT: 
	I1002 03:48:40.766032    2886 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:48:40.766120    2886 fix.go:56] fixHost completed within 23.713834ms
	I1002 03:48:40.766138    2886 start.go:83] releasing machines lock for "multinode-335000", held for 23.847666ms
	W1002 03:48:40.766304    2886 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-335000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-335000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:48:40.772540    2886 out.go:177] 
	W1002 03:48:40.776637    2886 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:48:40.776661    2886 out.go:239] * 
	* 
	W1002 03:48:40.779562    2886 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:48:40.786612    2886 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-335000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000: exit status 7 (64.350209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.24s)
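Editor's note: every failure in this group reduces to the same libmachine error, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket file is reachable but no socket_vmnet daemon is accepting connections on it. The sketch below is a hypothetical illustration, not part of the minikube test suite: it reproduces the same errno class (ECONNREFUSED) on a throwaway Unix-domain socket (`vmnet.sock` is a stand-in path) by binding the socket file without ever calling `listen()`.

```python
import errno
import os
import socket
import tempfile

# A Unix-domain socket file that exists but has no process listening behind it
# yields ECONNREFUSED on connect() -- the same errno QEMU reports above for
# /var/run/socket_vmnet when the socket_vmnet daemon is down.
path = os.path.join(tempfile.mkdtemp(), "vmnet.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)  # creates the socket file, but listen() is never called

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
err = None
try:
    client.connect(path)
except OSError as e:
    err = e.errno  # expected: errno.ECONNREFUSED
finally:
    client.close()
    server.close()

print(errno.errorcode[err])  # ECONNREFUSED
```

Under that assumption, the fix lives on the CI host (restarting the socket_vmnet service), not in the tests themselves.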

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (19.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-335000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-335000-m01 --driver=qemu2 
E1002 03:48:48.345291    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-335000-m01 --driver=qemu2 : exit status 80 (9.675042833s)

                                                
                                                
-- stdout --
	* [multinode-335000-m01] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-335000-m01 in cluster multinode-335000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-335000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-335000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-335000-m02 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-335000-m02 --driver=qemu2 : exit status 80 (9.789307042s)

                                                
                                                
-- stdout --
	* [multinode-335000-m02] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-335000-m02 in cluster multinode-335000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-335000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-335000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-335000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-335000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-335000: exit status 89 (76.401958ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-335000"

                                                
                                                
-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-335000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-335000 -n multinode-335000: exit status 7 (28.42875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-335000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.70s)

                                                
                                    
TestPreload (9.86s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-648000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-648000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.6932435s)

                                                
                                                
-- stdout --
	* [test-preload-648000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-648000 in cluster test-preload-648000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-648000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 03:49:00.713636    2946 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:49:00.713758    2946 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:49:00.713761    2946 out.go:309] Setting ErrFile to fd 2...
	I1002 03:49:00.713764    2946 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:49:00.713906    2946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:49:00.715002    2946 out.go:303] Setting JSON to false
	I1002 03:49:00.731072    2946 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1114,"bootTime":1696242626,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:49:00.731144    2946 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:49:00.736741    2946 out.go:177] * [test-preload-648000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:49:00.743714    2946 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:49:00.747680    2946 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:49:00.743778    2946 notify.go:220] Checking for updates...
	I1002 03:49:00.753691    2946 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:49:00.756727    2946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:49:00.759722    2946 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:49:00.762736    2946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:49:00.766005    2946 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:49:00.766053    2946 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:49:00.770729    2946 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:49:00.777711    2946 start.go:298] selected driver: qemu2
	I1002 03:49:00.777720    2946 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:49:00.777728    2946 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:49:00.780159    2946 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:49:00.783646    2946 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:49:00.786831    2946 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:49:00.786855    2946 cni.go:84] Creating CNI manager for ""
	I1002 03:49:00.786875    2946 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:49:00.786879    2946 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:49:00.786885    2946 start_flags.go:321] config:
	{Name:test-preload-648000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-648000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:49:00.791396    2946 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:49:00.798731    2946 out.go:177] * Starting control plane node test-preload-648000 in cluster test-preload-648000
	I1002 03:49:00.802738    2946 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1002 03:49:00.802821    2946 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/test-preload-648000/config.json ...
	I1002 03:49:00.802840    2946 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/test-preload-648000/config.json: {Name:mk6118a844d94d82120dbffdf3d6b76ea09b3ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:49:00.802857    2946 cache.go:107] acquiring lock: {Name:mkfb901c7f38d77f6d3178f9e744fed622697e3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:49:00.802862    2946 cache.go:107] acquiring lock: {Name:mk177569153ba63faa55bf99cda9f6dfb5b9ca36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:49:00.802870    2946 cache.go:107] acquiring lock: {Name:mkcde5bd596e02e078aeafbfcd906e01534db168 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:49:00.802887    2946 cache.go:107] acquiring lock: {Name:mk8dfd7e8d950fe280c42afc342716fc09d17c99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:49:00.803088    2946 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1002 03:49:00.803109    2946 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1002 03:49:00.803121    2946 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1002 03:49:00.803136    2946 cache.go:107] acquiring lock: {Name:mk1ec6c46cafc3d7bdd0239ff8e3064bf437e60c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:49:00.803137    2946 cache.go:107] acquiring lock: {Name:mkdb40371266d2d7c09e6b8960fb9946e980e722 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:49:00.803134    2946 cache.go:107] acquiring lock: {Name:mk1e66b4c7095741a38004cb4d1887fd9396957b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:49:00.803219    2946 start.go:365] acquiring machines lock for test-preload-648000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:49:00.803262    2946 start.go:369] acquired machines lock for "test-preload-648000" in 36.541µs
	I1002 03:49:00.803309    2946 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1002 03:49:00.803273    2946 start.go:93] Provisioning new machine with config: &{Name:test-preload-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-648000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:49:00.803317    2946 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:49:00.803327    2946 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1002 03:49:00.807706    2946 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:49:00.803274    2946 cache.go:107] acquiring lock: {Name:mk4358855ce161f5c194105fadb2d048ae2acb7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:49:00.803404    2946 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1002 03:49:00.803744    2946 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 03:49:00.808443    2946 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1002 03:49:00.815189    2946 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1002 03:49:00.815191    2946 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1002 03:49:00.815232    2946 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1002 03:49:00.816007    2946 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1002 03:49:00.816026    2946 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1002 03:49:00.816111    2946 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1002 03:49:00.816154    2946 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 03:49:00.819458    2946 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1002 03:49:00.825190    2946 start.go:159] libmachine.API.Create for "test-preload-648000" (driver="qemu2")
	I1002 03:49:00.825208    2946 client.go:168] LocalClient.Create starting
	I1002 03:49:00.825298    2946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:49:00.825328    2946 main.go:141] libmachine: Decoding PEM data...
	I1002 03:49:00.825342    2946 main.go:141] libmachine: Parsing certificate...
	I1002 03:49:00.825383    2946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:49:00.825402    2946 main.go:141] libmachine: Decoding PEM data...
	I1002 03:49:00.825409    2946 main.go:141] libmachine: Parsing certificate...
	I1002 03:49:00.825788    2946 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:49:00.941701    2946 main.go:141] libmachine: Creating SSH key...
	I1002 03:49:01.029536    2946 main.go:141] libmachine: Creating Disk image...
	I1002 03:49:01.029556    2946 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:49:01.029763    2946 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/disk.qcow2
	I1002 03:49:01.039122    2946 main.go:141] libmachine: STDOUT: 
	I1002 03:49:01.039144    2946 main.go:141] libmachine: STDERR: 
	I1002 03:49:01.039223    2946 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/disk.qcow2 +20000M
	I1002 03:49:01.047718    2946 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:49:01.047736    2946 main.go:141] libmachine: STDERR: 
	I1002 03:49:01.047758    2946 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/disk.qcow2
	I1002 03:49:01.047772    2946 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:49:01.047836    2946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:0a:65:14:be:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/disk.qcow2
	I1002 03:49:01.049844    2946 main.go:141] libmachine: STDOUT: 
	I1002 03:49:01.049860    2946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:49:01.049878    2946 client.go:171] LocalClient.Create took 224.669917ms
	I1002 03:49:01.459454    2946 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1002 03:49:01.582236    2946 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1002 03:49:01.783772    2946 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1002 03:49:01.974383    2946 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1002 03:49:01.974405    2946 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.171543125s
	I1002 03:49:01.974420    2946 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I1002 03:49:02.005608    2946 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1002 03:49:02.140670    2946 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W1002 03:49:02.428670    2946 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1002 03:49:02.428716    2946 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1002 03:49:02.842353    2946 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1002 03:49:02.961437    2946 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1002 03:49:02.961516    2946 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 03:49:03.050079    2946 start.go:128] duration metric: createHost completed in 2.246785917s
	I1002 03:49:03.050138    2946 start.go:83] releasing machines lock for "test-preload-648000", held for 2.246914917s
	W1002 03:49:03.050193    2946 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:49:03.064946    2946 out.go:177] * Deleting "test-preload-648000" in qemu2 ...
	W1002 03:49:03.082100    2946 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:49:03.082144    2946 start.go:703] Will try again in 5 seconds ...
	I1002 03:49:03.183634    2946 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 03:49:03.183682    2946 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.380883333s
	I1002 03:49:03.183710    2946 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 03:49:04.198125    2946 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1002 03:49:04.198172    2946 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.395176875s
	I1002 03:49:04.198198    2946 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1002 03:49:04.501487    2946 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1002 03:49:04.501545    2946 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.698566916s
	I1002 03:49:04.501573    2946 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1002 03:49:05.525501    2946 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1002 03:49:05.525555    2946 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.722778291s
	I1002 03:49:05.525582    2946 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1002 03:49:06.150840    2946 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1002 03:49:06.150895    2946 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.348155709s
	I1002 03:49:06.150926    2946 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1002 03:49:07.049759    2946 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1002 03:49:07.049813    2946 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.246844709s
	I1002 03:49:07.049857    2946 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1002 03:49:08.084177    2946 start.go:365] acquiring machines lock for test-preload-648000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:49:08.084542    2946 start.go:369] acquired machines lock for "test-preload-648000" in 291.792µs
	I1002 03:49:08.084677    2946 start.go:93] Provisioning new machine with config: &{Name:test-preload-648000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-648000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:49:08.084904    2946 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:49:08.094429    2946 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:49:08.141376    2946 start.go:159] libmachine.API.Create for "test-preload-648000" (driver="qemu2")
	I1002 03:49:08.141409    2946 client.go:168] LocalClient.Create starting
	I1002 03:49:08.141543    2946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:49:08.141592    2946 main.go:141] libmachine: Decoding PEM data...
	I1002 03:49:08.141617    2946 main.go:141] libmachine: Parsing certificate...
	I1002 03:49:08.141697    2946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:49:08.141737    2946 main.go:141] libmachine: Decoding PEM data...
	I1002 03:49:08.141759    2946 main.go:141] libmachine: Parsing certificate...
	I1002 03:49:08.142243    2946 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:49:08.265281    2946 main.go:141] libmachine: Creating SSH key...
	I1002 03:49:08.321056    2946 main.go:141] libmachine: Creating Disk image...
	I1002 03:49:08.321071    2946 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:49:08.321244    2946 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/disk.qcow2
	I1002 03:49:08.330392    2946 main.go:141] libmachine: STDOUT: 
	I1002 03:49:08.330405    2946 main.go:141] libmachine: STDERR: 
	I1002 03:49:08.330451    2946 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/disk.qcow2 +20000M
	I1002 03:49:08.338082    2946 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:49:08.338092    2946 main.go:141] libmachine: STDERR: 
	I1002 03:49:08.338108    2946 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/disk.qcow2
	I1002 03:49:08.338117    2946 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:49:08.338162    2946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:19:d5:d0:7c:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/test-preload-648000/disk.qcow2
	I1002 03:49:08.339955    2946 main.go:141] libmachine: STDOUT: 
	I1002 03:49:08.339966    2946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:49:08.339978    2946 client.go:171] LocalClient.Create took 198.56925ms
	I1002 03:49:08.759408    2946 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I1002 03:49:08.759487    2946 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 7.956387375s
	I1002 03:49:08.759532    2946 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1002 03:49:08.759587    2946 cache.go:87] Successfully saved all images to host disk.
	I1002 03:49:10.342251    2946 start.go:128] duration metric: createHost completed in 2.25733475s
	I1002 03:49:10.342360    2946 start.go:83] releasing machines lock for "test-preload-648000", held for 2.257808541s
	W1002 03:49:10.342729    2946 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-648000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-648000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:49:10.352285    2946 out.go:177] 
	W1002 03:49:10.356454    2946 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:49:10.356480    2946 out.go:239] * 
	* 
	W1002 03:49:10.359325    2946 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:49:10.366347    2946 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-648000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:523: *** TestPreload FAILED at 2023-10-02 03:49:10.384346 -0700 PDT m=+850.790708917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-648000 -n test-preload-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-648000 -n test-preload-648000: exit status 7 (63.525167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-648000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-648000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-648000
--- FAIL: TestPreload (9.86s)

TestScheduledStopUnix (9.85s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-927000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-927000 --memory=2048 --driver=qemu2 : exit status 80 (9.685469917s)

-- stdout --
	* [scheduled-stop-927000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-927000 in cluster scheduled-stop-927000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-927000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-927000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-927000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-927000 in cluster scheduled-stop-927000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-927000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-927000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-10-02 03:49:20.23007 -0700 PDT m=+860.636636334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-927000 -n scheduled-stop-927000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-927000 -n scheduled-stop-927000: exit status 7 (69.764208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-927000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-927000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-927000
--- FAIL: TestScheduledStopUnix (9.85s)
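Every qemu2 start in this run fails the same way: the QEMU wrapper cannot reach the socket_vmnet daemon at `/var/run/socket_vmnet` ("Connection refused"). A minimal diagnostic sketch for the agent follows; the socket path is taken from the log above, while the Homebrew service name in the comment is an assumption:

```shell
#!/bin/sh
# check_sock: print "ok" if the given path is an existing unix socket,
# "missing" otherwise (POSIX `test -S`).
check_sock() {
  if [ -S "$1" ]; then echo "ok"; else echo "missing"; fi
}

# Path taken from the failing log; a "missing" result here would explain
# the repeated "Connection refused" errors above.
check_sock /var/run/socket_vmnet

# If missing, restarting the daemon may fix it (service name assumed for
# a Homebrew install of socket_vmnet):
#   sudo brew services restart socket_vmnet
```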

                                                
                                    
TestSkaffold (11.95s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3714111086 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-716000 --memory=2600 --driver=qemu2 
E1002 03:49:29.305324    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-716000 --memory=2600 --driver=qemu2 : exit status 80 (9.902119708s)

                                                
                                                
-- stdout --
	* [skaffold-716000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-716000 in cluster skaffold-716000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-716000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-716000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-716000 in cluster skaffold-716000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-716000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:523: *** TestSkaffold FAILED at 2023-10-02 03:49:32.187756 -0700 PDT m=+872.594567001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-716000 -n skaffold-716000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-716000 -n skaffold-716000: exit status 7 (59.488833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-716000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-716000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-716000
--- FAIL: TestSkaffold (11.95s)

                                                
                                    
TestRunningBinaryUpgrade (137.85s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:107: v1.6.2 release installation failed: bad response code: 404
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-02 03:52:07.574258 -0700 PDT m=+1027.984332834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-489000 -n running-upgrade-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-489000 -n running-upgrade-489000: exit status 85 (84.075083ms)

                                                
                                                
-- stdout --
	* Profile "running-upgrade-489000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-489000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-489000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-489000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-489000\"")
helpers_test.go:175: Cleaning up "running-upgrade-489000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-489000
--- FAIL: TestRunningBinaryUpgrade (137.85s)
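The 404 here comes from version_upgrade_test.go fetching the minikube v1.6.2 release binary. v1.6.2 (December 2019) predates darwin/arm64 builds, so a release asset for this agent's platform plausibly does not exist. A hedged sketch of the asset URL involved; the URL pattern is the standard GitHub release-asset convention, assumed (not read from the test source) to match what the test downloads:

```shell
#!/bin/sh
# Build the release-asset URL for a given minikube version and platform.
# Assumption: the upgrade test fetches assets via GitHub's standard
# releases/download convention.
ver="v1.6.2"
platform="darwin-arm64"
url="https://github.com/kubernetes/minikube/releases/download/${ver}/minikube-${platform}"
echo "$url"

# A HEAD request against this URL would distinguish a genuinely missing
# arm64 asset from a transient network failure, e.g.:
#   curl -s -o /dev/null -w '%{http_code}\n' -I "$url"
```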

                                                
                                    
TestKubernetesUpgrade (15.37s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-474000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-474000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.853749041s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-474000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-474000 in cluster kubernetes-upgrade-474000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-474000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 03:49:34.714149    3254 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:49:34.714295    3254 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:49:34.714298    3254 out.go:309] Setting ErrFile to fd 2...
	I1002 03:49:34.714301    3254 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:49:34.714431    3254 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:49:34.715476    3254 out.go:303] Setting JSON to false
	I1002 03:49:34.731529    3254 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1148,"bootTime":1696242626,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:49:34.731627    3254 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:49:34.736704    3254 out.go:177] * [kubernetes-upgrade-474000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:49:34.750676    3254 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:49:34.746745    3254 notify.go:220] Checking for updates...
	I1002 03:49:34.756680    3254 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:49:34.759667    3254 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:49:34.765595    3254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:49:34.769677    3254 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:49:34.772689    3254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:49:34.776009    3254 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:49:34.776085    3254 config.go:182] Loaded profile config "offline-docker-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:49:34.776130    3254 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:49:34.780706    3254 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:49:34.786647    3254 start.go:298] selected driver: qemu2
	I1002 03:49:34.786653    3254 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:49:34.786659    3254 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:49:34.789060    3254 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:49:34.791634    3254 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:49:34.794764    3254 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 03:49:34.794786    3254 cni.go:84] Creating CNI manager for ""
	I1002 03:49:34.794793    3254 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 03:49:34.794798    3254 start_flags.go:321] config:
	{Name:kubernetes-upgrade-474000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-474000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:49:34.799523    3254 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:49:34.806698    3254 out.go:177] * Starting control plane node kubernetes-upgrade-474000 in cluster kubernetes-upgrade-474000
	I1002 03:49:34.810670    3254 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 03:49:34.810684    3254 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1002 03:49:34.810693    3254 cache.go:57] Caching tarball of preloaded images
	I1002 03:49:34.810742    3254 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:49:34.810748    3254 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1002 03:49:34.810806    3254 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/kubernetes-upgrade-474000/config.json ...
	I1002 03:49:34.810817    3254 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/kubernetes-upgrade-474000/config.json: {Name:mk00e0e01b0acb6e2723c505e441b99c443fe376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:49:34.811030    3254 start.go:365] acquiring machines lock for kubernetes-upgrade-474000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:49:34.811065    3254 start.go:369] acquired machines lock for "kubernetes-upgrade-474000" in 25.958µs
	I1002 03:49:34.811076    3254 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-474000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-474000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:49:34.811105    3254 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:49:34.819685    3254 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:49:34.837761    3254 start.go:159] libmachine.API.Create for "kubernetes-upgrade-474000" (driver="qemu2")
	I1002 03:49:34.837789    3254 client.go:168] LocalClient.Create starting
	I1002 03:49:34.837856    3254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:49:34.837888    3254 main.go:141] libmachine: Decoding PEM data...
	I1002 03:49:34.837902    3254 main.go:141] libmachine: Parsing certificate...
	I1002 03:49:34.837940    3254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:49:34.837962    3254 main.go:141] libmachine: Decoding PEM data...
	I1002 03:49:34.837969    3254 main.go:141] libmachine: Parsing certificate...
	I1002 03:49:34.838323    3254 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:49:34.950050    3254 main.go:141] libmachine: Creating SSH key...
	I1002 03:49:35.024777    3254 main.go:141] libmachine: Creating Disk image...
	I1002 03:49:35.024783    3254 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:49:35.024940    3254 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2
	I1002 03:49:35.033849    3254 main.go:141] libmachine: STDOUT: 
	I1002 03:49:35.033873    3254 main.go:141] libmachine: STDERR: 
	I1002 03:49:35.033935    3254 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2 +20000M
	I1002 03:49:35.041421    3254 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:49:35.041435    3254 main.go:141] libmachine: STDERR: 
	I1002 03:49:35.041450    3254 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2
	I1002 03:49:35.041460    3254 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:49:35.041497    3254 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:94:33:7b:a5:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2
	I1002 03:49:35.043117    3254 main.go:141] libmachine: STDOUT: 
	I1002 03:49:35.043137    3254 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:49:35.043156    3254 client.go:171] LocalClient.Create took 205.365667ms
	I1002 03:49:37.045326    3254 start.go:128] duration metric: createHost completed in 2.234235375s
	I1002 03:49:37.045426    3254 start.go:83] releasing machines lock for "kubernetes-upgrade-474000", held for 2.234397s
	W1002 03:49:37.045488    3254 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:49:37.056748    3254 out.go:177] * Deleting "kubernetes-upgrade-474000" in qemu2 ...
	W1002 03:49:37.078421    3254 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:49:37.078454    3254 start.go:703] Will try again in 5 seconds ...
	I1002 03:49:42.078630    3254 start.go:365] acquiring machines lock for kubernetes-upgrade-474000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:49:42.078700    3254 start.go:369] acquired machines lock for "kubernetes-upgrade-474000" in 50.875µs
	I1002 03:49:42.078721    3254 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-474000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-474000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:49:42.078783    3254 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:49:42.083680    3254 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:49:42.101020    3254 start.go:159] libmachine.API.Create for "kubernetes-upgrade-474000" (driver="qemu2")
	I1002 03:49:42.101049    3254 client.go:168] LocalClient.Create starting
	I1002 03:49:42.101104    3254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:49:42.101136    3254 main.go:141] libmachine: Decoding PEM data...
	I1002 03:49:42.101165    3254 main.go:141] libmachine: Parsing certificate...
	I1002 03:49:42.101203    3254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:49:42.101216    3254 main.go:141] libmachine: Decoding PEM data...
	I1002 03:49:42.101223    3254 main.go:141] libmachine: Parsing certificate...
	I1002 03:49:42.101518    3254 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:49:42.255799    3254 main.go:141] libmachine: Creating SSH key...
	I1002 03:49:42.482719    3254 main.go:141] libmachine: Creating Disk image...
	I1002 03:49:42.482731    3254 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:49:42.482920    3254 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2
	I1002 03:49:42.492372    3254 main.go:141] libmachine: STDOUT: 
	I1002 03:49:42.492386    3254 main.go:141] libmachine: STDERR: 
	I1002 03:49:42.492441    3254 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2 +20000M
	I1002 03:49:42.500000    3254 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:49:42.500026    3254 main.go:141] libmachine: STDERR: 
	I1002 03:49:42.500039    3254 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2
	I1002 03:49:42.500044    3254 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:49:42.500085    3254 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:48:94:a7:bb:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2
	I1002 03:49:42.501778    3254 main.go:141] libmachine: STDOUT: 
	I1002 03:49:42.501793    3254 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:49:42.501808    3254 client.go:171] LocalClient.Create took 400.764125ms
	I1002 03:49:44.503934    3254 start.go:128] duration metric: createHost completed in 2.425182584s
	I1002 03:49:44.504004    3254 start.go:83] releasing machines lock for "kubernetes-upgrade-474000", held for 2.425341625s
	W1002 03:49:44.504374    3254 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-474000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-474000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:49:44.514084    3254 out.go:177] 
	W1002 03:49:44.518125    3254 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:49:44.518163    3254 out.go:239] * 
	* 
	W1002 03:49:44.520759    3254 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:49:44.529127    3254 out.go:177] 

** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-474000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-474000
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-474000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-474000 status --format={{.Host}}: exit status 7 (36.533083ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-474000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-474000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.180687625s)

-- stdout --
	* [kubernetes-upgrade-474000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-474000 in cluster kubernetes-upgrade-474000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-474000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-474000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:49:44.707112    3285 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:49:44.707241    3285 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:49:44.707245    3285 out.go:309] Setting ErrFile to fd 2...
	I1002 03:49:44.707247    3285 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:49:44.707359    3285 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:49:44.708419    3285 out.go:303] Setting JSON to false
	I1002 03:49:44.724381    3285 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1158,"bootTime":1696242626,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:49:44.724483    3285 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:49:44.728248    3285 out.go:177] * [kubernetes-upgrade-474000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:49:44.738196    3285 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:49:44.734245    3285 notify.go:220] Checking for updates...
	I1002 03:49:44.744182    3285 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:49:44.747154    3285 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:49:44.748499    3285 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:49:44.751185    3285 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:49:44.754241    3285 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:49:44.757636    3285 config.go:182] Loaded profile config "kubernetes-upgrade-474000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1002 03:49:44.757907    3285 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:49:44.762180    3285 out.go:177] * Using the qemu2 driver based on existing profile
	I1002 03:49:44.769188    3285 start.go:298] selected driver: qemu2
	I1002 03:49:44.769194    3285 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-474000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-474000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:49:44.769237    3285 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:49:44.771548    3285 cni.go:84] Creating CNI manager for ""
	I1002 03:49:44.771565    3285 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:49:44.771577    3285 start_flags.go:321] config:
	{Name:kubernetes-upgrade-474000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubernetes-upgrade-474000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:49:44.775834    3285 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:49:44.783216    3285 out.go:177] * Starting control plane node kubernetes-upgrade-474000 in cluster kubernetes-upgrade-474000
	I1002 03:49:44.787152    3285 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:49:44.787165    3285 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:49:44.787171    3285 cache.go:57] Caching tarball of preloaded images
	I1002 03:49:44.787217    3285 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:49:44.787223    3285 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:49:44.787272    3285 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/kubernetes-upgrade-474000/config.json ...
	I1002 03:49:44.787670    3285 start.go:365] acquiring machines lock for kubernetes-upgrade-474000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:49:44.787701    3285 start.go:369] acquired machines lock for "kubernetes-upgrade-474000" in 25.125µs
	I1002 03:49:44.787708    3285 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:49:44.787714    3285 fix.go:54] fixHost starting: 
	I1002 03:49:44.787831    3285 fix.go:102] recreateIfNeeded on kubernetes-upgrade-474000: state=Stopped err=<nil>
	W1002 03:49:44.787839    3285 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:49:44.795027    3285 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-474000" ...
	I1002 03:49:44.799205    3285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:48:94:a7:bb:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2
	I1002 03:49:44.801384    3285 main.go:141] libmachine: STDOUT: 
	I1002 03:49:44.801410    3285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:49:44.801439    3285 fix.go:56] fixHost completed within 13.726667ms
	I1002 03:49:44.801443    3285 start.go:83] releasing machines lock for "kubernetes-upgrade-474000", held for 13.738875ms
	W1002 03:49:44.801449    3285 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:49:44.801482    3285 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:49:44.801487    3285 start.go:703] Will try again in 5 seconds ...
	I1002 03:49:49.803647    3285 start.go:365] acquiring machines lock for kubernetes-upgrade-474000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:49:49.804143    3285 start.go:369] acquired machines lock for "kubernetes-upgrade-474000" in 355.708µs
	I1002 03:49:49.804352    3285 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:49:49.804373    3285 fix.go:54] fixHost starting: 
	I1002 03:49:49.805179    3285 fix.go:102] recreateIfNeeded on kubernetes-upgrade-474000: state=Stopped err=<nil>
	W1002 03:49:49.805206    3285 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:49:49.809709    3285 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-474000" ...
	I1002 03:49:49.816907    3285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:48:94:a7:bb:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubernetes-upgrade-474000/disk.qcow2
	I1002 03:49:49.826871    3285 main.go:141] libmachine: STDOUT: 
	I1002 03:49:49.826928    3285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:49:49.827011    3285 fix.go:56] fixHost completed within 22.639916ms
	I1002 03:49:49.827030    3285 start.go:83] releasing machines lock for "kubernetes-upgrade-474000", held for 22.84275ms
	W1002 03:49:49.827241    3285 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-474000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-474000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:49:49.835572    3285 out.go:177] 
	W1002 03:49:49.839737    3285 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:49:49.839764    3285 out.go:239] * 
	* 
	W1002 03:49:49.842192    3285 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:49:49.849666    3285 out.go:177] 

** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-474000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-474000 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-474000 version --output=json: exit status 1 (64.48025ms)

** stderr **
	error: context "kubernetes-upgrade-474000" does not exist

** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:523: *** TestKubernetesUpgrade FAILED at 2023-10-02 03:49:49.927707 -0700 PDT m=+890.334884001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-474000 -n kubernetes-upgrade-474000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-474000 -n kubernetes-upgrade-474000: exit status 7 (31.784291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-474000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-474000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-474000
--- FAIL: TestKubernetesUpgrade (15.37s)
E1002 03:50:03.390222    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
E1002 03:50:31.094355    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
E1002 03:50:51.226160    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
TestStoppedBinaryUpgrade/Setup (157.65s)
=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:168: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (157.65s)
TestPause/serial/Start (9.79s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-619000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-619000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.721523084s)

-- stdout --
	* [pause-619000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-619000 in cluster pause-619000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-619000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-619000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-619000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-619000 -n pause-619000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-619000 -n pause-619000: exit status 7 (65.698083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-619000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.79s)
TestNoKubernetes/serial/StartWithK8s (10.44s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-264000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-264000 --driver=qemu2 : exit status 80 (10.375561459s)

-- stdout --
	* [NoKubernetes-264000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-264000 in cluster NoKubernetes-264000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-264000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-264000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-264000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-264000 -n NoKubernetes-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-264000 -n NoKubernetes-264000: exit status 7 (65.637375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-264000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.44s)
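The repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` above means QEMU could reach the socket path but no socket_vmnet daemon was listening behind it (or the socket file is absent entirely). A minimal diagnostic sketch; the socket path is taken from the log, while the helper name `check_sock` is ours:

```shell
# check_sock reports whether the given path exists as a unix socket.
# "Connection refused" from QEMU usually means the daemon that should be
# listening on this socket is not running on the CI host.
check_sock() {
    if [ -S "$1" ]; then
        echo "socket present: $1"
    else
        echo "socket missing: $1 (start the socket_vmnet daemon and retry)"
    fi
}

# Path reported by the failing test run above.
check_sock /var/run/socket_vmnet
```

When the socket is missing or stale, restarting the socket_vmnet service on the macOS agent (it is typically installed as a launchd service) before re-running the suite is the usual remedy; this would explain the large cluster of qemu2 start failures in this report.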

TestStoppedBinaryUpgrade/Upgrade (3.24s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.479660757.exe start -p stopped-upgrade-517000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.479660757.exe start -p stopped-upgrade-517000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.479660757.exe: permission denied (5.718542ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.479660757.exe start -p stopped-upgrade-517000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.479660757.exe start -p stopped-upgrade-517000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.479660757.exe: permission denied (5.8695ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.479660757.exe start -p stopped-upgrade-517000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.479660757.exe start -p stopped-upgrade-517000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.479660757.exe: permission denied (5.594667ms)
version_upgrade_test.go:202: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.479660757.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (3.24s)
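`fork/exec ...: permission denied` on all three retries indicates the downloaded legacy v1.6.2 binary was written to the temp directory without the execute bit set, so the failure is in the test's download helper rather than in minikube itself. A small reproduction of that failure mode under this assumption, using a throwaway script in place of the minikube binary:

```shell
# Reproduce the failure mode: a file without the execute bit cannot be exec'd.
BIN=$(mktemp)                            # mktemp creates the file mode 0600
printf '#!/bin/sh\necho ok\n' > "$BIN"

"$BIN" 2>/dev/null && echo "unexpectedly ran" \
                   || echo "permission denied, as in the log"

chmod +x "$BIN"   # restoring the execute bit is the likely fix in the helper
"$BIN"            # now prints: ok

rm -f "$BIN"
```

Since all three attempts fail in the same few milliseconds, retrying without re-fixing the file mode cannot succeed; the download path needs a `chmod +x` (or an executable-mode `os.OpenFile`) before the first exec.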

TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-517000
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-517000: exit status 85 (115.509417ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| node    | multinode-335000 node delete                         | multinode-335000          | jenkins | v1.31.2 | 02 Oct 23 03:48 PDT |                     |
	|         | m03                                                  |                           |         |         |                     |                     |
	| stop    | multinode-335000 stop                                | multinode-335000          | jenkins | v1.31.2 | 02 Oct 23 03:48 PDT | 02 Oct 23 03:48 PDT |
	| start   | -p multinode-335000                                  | multinode-335000          | jenkins | v1.31.2 | 02 Oct 23 03:48 PDT |                     |
	|         | --wait=true -v=8                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| node    | list -p multinode-335000                             | multinode-335000          | jenkins | v1.31.2 | 02 Oct 23 03:48 PDT |                     |
	| start   | -p multinode-335000-m01                              | multinode-335000-m01      | jenkins | v1.31.2 | 02 Oct 23 03:48 PDT |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p multinode-335000-m02                              | multinode-335000-m02      | jenkins | v1.31.2 | 02 Oct 23 03:48 PDT |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| node    | add -p multinode-335000                              | multinode-335000          | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	| delete  | -p multinode-335000-m02                              | multinode-335000-m02      | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT | 02 Oct 23 03:49 PDT |
	| delete  | -p multinode-335000                                  | multinode-335000          | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT | 02 Oct 23 03:49 PDT |
	| start   | -p test-preload-648000                               | test-preload-648000       | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --preload=false --driver=qemu2                       |                           |         |         |                     |                     |
	|         |  --kubernetes-version=v1.24.4                        |                           |         |         |                     |                     |
	| delete  | -p test-preload-648000                               | test-preload-648000       | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT | 02 Oct 23 03:49 PDT |
	| start   | -p scheduled-stop-927000                             | scheduled-stop-927000     | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | --memory=2048 --driver=qemu2                         |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-927000                             | scheduled-stop-927000     | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT | 02 Oct 23 03:49 PDT |
	| start   | -p skaffold-716000                                   | skaffold-716000           | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | --memory=2600 --driver=qemu2                         |                           |         |         |                     |                     |
	| delete  | -p skaffold-716000                                   | skaffold-716000           | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT | 02 Oct 23 03:49 PDT |
	| start   | -p offline-docker-431000                             | offline-docker-431000     | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --memory=2048 --wait=true                            |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo cat                            | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo cat                            | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo cat                            | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo crictl                         | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo crictl                         | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | ps --all                                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo find                           | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo ip a s                         | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	| ssh     | -p cilium-547000 sudo ip r s                         | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | iptables-save                                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo iptables                       | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | -t nat -L -n -v                                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo cat                            | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo cat                            | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo cat                            | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo docker                         | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo cat                            | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo cat                            | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo cat                            | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo cat                            | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo                                | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo find                           | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-547000 sudo crio                           | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-547000                                     | cilium-547000             | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT | 02 Oct 23 03:49 PDT |
	| start   | -p kubernetes-upgrade-474000                         | kubernetes-upgrade-474000 | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-431000                             | offline-docker-431000     | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT | 02 Oct 23 03:49 PDT |
	| stop    | -p kubernetes-upgrade-474000                         | kubernetes-upgrade-474000 | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT | 02 Oct 23 03:49 PDT |
	| start   | -p kubernetes-upgrade-474000                         | kubernetes-upgrade-474000 | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-474000                         | kubernetes-upgrade-474000 | jenkins | v1.31.2 | 02 Oct 23 03:49 PDT | 02 Oct 23 03:49 PDT |
	| delete  | -p running-upgrade-489000                            | running-upgrade-489000    | jenkins | v1.31.2 | 02 Oct 23 03:52 PDT | 02 Oct 23 03:52 PDT |
	| start   | -p pause-619000 --memory=2048                        | pause-619000              | jenkins | v1.31.2 | 02 Oct 23 03:52 PDT |                     |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                            |                           |         |         |                     |                     |
	| delete  | -p pause-619000                                      | pause-619000              | jenkins | v1.31.2 | 02 Oct 23 03:52 PDT | 02 Oct 23 03:52 PDT |
	| start   | -p NoKubernetes-264000                               | NoKubernetes-264000       | jenkins | v1.31.2 | 02 Oct 23 03:52 PDT |                     |
	|         | --no-kubernetes                                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20                            |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-264000                               | NoKubernetes-264000       | jenkins | v1.31.2 | 02 Oct 23 03:52 PDT |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 03:52:17
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 03:52:17.919101    3390 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:52:17.919245    3390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:52:17.919247    3390 out.go:309] Setting ErrFile to fd 2...
	I1002 03:52:17.919249    3390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:52:17.919388    3390 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:52:17.920366    3390 out.go:303] Setting JSON to false
	I1002 03:52:17.936204    3390 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1311,"bootTime":1696242626,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:52:17.936283    3390 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:52:17.941378    3390 out.go:177] * [NoKubernetes-264000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:52:17.949340    3390 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:52:17.953409    3390 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:52:17.949400    3390 notify.go:220] Checking for updates...
	I1002 03:52:17.956444    3390 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:52:17.958013    3390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:52:17.961400    3390 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:52:17.964408    3390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:52:17.967844    3390 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:52:17.967886    3390 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:52:17.972413    3390 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:52:17.979395    3390 start.go:298] selected driver: qemu2
	I1002 03:52:17.979400    3390 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:52:17.979407    3390 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:52:17.979473    3390 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:52:17.982405    3390 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:52:17.987785    3390 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1002 03:52:17.987870    3390 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 03:52:17.987883    3390 cni.go:84] Creating CNI manager for ""
	I1002 03:52:17.987890    3390 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:52:17.987893    3390 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:52:17.987899    3390 start_flags.go:321] config:
	{Name:NoKubernetes-264000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:NoKubernetes-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:52:17.992492    3390 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:52:17.995397    3390 out.go:177] * Starting control plane node NoKubernetes-264000 in cluster NoKubernetes-264000
	I1002 03:52:18.003451    3390 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:52:18.003464    3390 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:52:18.003480    3390 cache.go:57] Caching tarball of preloaded images
	I1002 03:52:18.003552    3390 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:52:18.003556    3390 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:52:18.003626    3390 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/NoKubernetes-264000/config.json ...
	I1002 03:52:18.003635    3390 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/NoKubernetes-264000/config.json: {Name:mkb98b8e3f30de868bc9c23bf2479fe41ecead73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:52:18.003838    3390 start.go:365] acquiring machines lock for NoKubernetes-264000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:52:18.003869    3390 start.go:369] acquired machines lock for "NoKubernetes-264000" in 23.791µs
	I1002 03:52:18.003878    3390 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-264000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.28.2 ClusterName:NoKubernetes-264000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:52:18.003909    3390 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:52:18.011417    3390 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1002 03:52:18.027320    3390 start.go:159] libmachine.API.Create for "NoKubernetes-264000" (driver="qemu2")
	I1002 03:52:18.027346    3390 client.go:168] LocalClient.Create starting
	I1002 03:52:18.027401    3390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:52:18.027422    3390 main.go:141] libmachine: Decoding PEM data...
	I1002 03:52:18.027431    3390 main.go:141] libmachine: Parsing certificate...
	I1002 03:52:18.027466    3390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:52:18.027485    3390 main.go:141] libmachine: Decoding PEM data...
	I1002 03:52:18.027493    3390 main.go:141] libmachine: Parsing certificate...
	I1002 03:52:18.027826    3390 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:52:18.141134    3390 main.go:141] libmachine: Creating SSH key...
	I1002 03:52:18.213832    3390 main.go:141] libmachine: Creating Disk image...
	I1002 03:52:18.213835    3390 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:52:18.213999    3390 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/NoKubernetes-264000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/NoKubernetes-264000/disk.qcow2
	I1002 03:52:18.223116    3390 main.go:141] libmachine: STDOUT: 
	I1002 03:52:18.223126    3390 main.go:141] libmachine: STDERR: 
	I1002 03:52:18.223170    3390 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/NoKubernetes-264000/disk.qcow2 +20000M
	I1002 03:52:18.230640    3390 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:52:18.230647    3390 main.go:141] libmachine: STDERR: 
	I1002 03:52:18.230664    3390 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/NoKubernetes-264000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/NoKubernetes-264000/disk.qcow2
	I1002 03:52:18.230671    3390 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:52:18.230702    3390 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/NoKubernetes-264000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/NoKubernetes-264000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/NoKubernetes-264000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:8e:54:c2:b4:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/NoKubernetes-264000/disk.qcow2
	I1002 03:52:18.232355    3390 main.go:141] libmachine: STDOUT: 
	I1002 03:52:18.232363    3390 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:52:18.232382    3390 client.go:171] LocalClient.Create took 205.036709ms
	I1002 03:52:20.234555    3390 start.go:128] duration metric: createHost completed in 2.230675167s
	I1002 03:52:20.234587    3390 start.go:83] releasing machines lock for "NoKubernetes-264000", held for 2.230756666s
	W1002 03:52:20.234673    3390 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:52:20.246871    3390 out.go:177] * Deleting "NoKubernetes-264000" in qemu2 ...
	W1002 03:52:20.265936    3390 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:52:20.265965    3390 start.go:703] Will try again in 5 seconds ...
	
	* 
	* Profile "stopped-upgrade-517000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-517000"

-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)
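The "profile not found" exit above follows from how minikube stores cluster state: as the earlier log lines show, each profile's config is saved to `$MINIKUBE_HOME/profiles/<name>/config.json`, so a profile whose creation failed leaves no directory behind and `minikube logs` has nothing to read. A minimal sketch of that lookup (the default `~/.minikube` location is an assumption; CI overrides it via `MINIKUBE_HOME`):

```shell
# List profiles the way the log suggests minikube finds them: one
# profiles/<name>/config.json per cluster under MINIKUBE_HOME.
MINIKUBE_HOME="${MINIKUBE_HOME:-$HOME/.minikube}"
count=0
for cfg in "$MINIKUBE_HOME"/profiles/*/config.json; do
  [ -e "$cfg" ] || continue          # glob did not match: no profiles at all
  echo "profile: $(basename "$(dirname "$cfg")")"
  count=$((count + 1))
done
echo "profiles found: $count"
```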

TestNoKubernetes/serial/StartWithStopK8s (5.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-264000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-264000 --no-kubernetes --driver=qemu2 : exit status 80 (5.25651925s)

-- stdout --
	* [NoKubernetes-264000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-264000
	* Restarting existing qemu2 VM for "NoKubernetes-264000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-264000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-264000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-264000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-264000 -n NoKubernetes-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-264000 -n NoKubernetes-264000: exit status 7 (32.20325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-264000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)
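Every qemu2 failure in this report dies on the same line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon on the CI host is not accepting connections. A diagnostic sketch one could run on the host (the socket path comes from the log; the Homebrew service name in the comment is an assumption about a typical install, not something the report confirms):

```shell
# Check whether the socket_vmnet control socket exists at the path the
# failing qemu invocations try to connect to.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  # A stale socket with no listener behind it also yields "Connection refused".
  STATUS="socket exists (daemon may still be down; stale sockets refuse connections)"
else
  STATUS="socket missing - socket_vmnet is not running"
fi
echo "$SOCK: $STATUS"
# Typical remediation on the host, assuming a Homebrew-managed service:
#   sudo brew services restart socket_vmnet
```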

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.63s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.63s)

TestNoKubernetes/serial/Start (5.35s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-264000 --no-kubernetes --driver=qemu2 
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17340
- KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2034400791/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-264000 --no-kubernetes --driver=qemu2 : exit status 80 (5.2827565s)

-- stdout --
	* [NoKubernetes-264000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-264000
	* Restarting existing qemu2 VM for "NoKubernetes-264000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-264000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-264000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-264000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-264000 -n NoKubernetes-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-264000 -n NoKubernetes-264000: exit status 7 (65.627583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-264000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.35s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.1s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17340
- KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2693996555/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.10s)
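Both TestHyperkitDriverSkipUpgrade failures reduce to the fact the log itself prints: `DRV_UNSUPPORTED_OS` — hyperkit is an Intel-only hypervisor, so on this darwin/arm64 agent the driver can never start. A sketch of the same architecture guard (the exact check minikube performs is not shown in the report; this is an illustrative equivalent):

```shell
# Reject the hyperkit driver on non-Intel macs, mirroring the
# DRV_UNSUPPORTED_OS exit seen above.
arch=$(uname -m)
case "$arch" in
  arm64|aarch64) verdict="hyperkit unsupported on $arch; use --driver=qemu2" ;;
  x86_64)        verdict="hyperkit may be usable on $arch" ;;
  *)             verdict="unknown architecture: $arch" ;;
esac
echo "$verdict"
```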

TestNoKubernetes/serial/StartNoArgs (7.56s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-264000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-264000 --driver=qemu2 : exit status 80 (7.491241041s)

-- stdout --
	* [NoKubernetes-264000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-264000
	* Restarting existing qemu2 VM for "NoKubernetes-264000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-264000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-264000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-264000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-264000 -n NoKubernetes-264000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-264000 -n NoKubernetes-264000: exit status 7 (63.712625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-264000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (7.56s)

TestNetworkPlugins/group/auto/Start (9.8s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.79925875s)

-- stdout --
	* [auto-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-547000 in cluster auto-547000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-547000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:53:14.248396    3631 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:53:14.248560    3631 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:53:14.248563    3631 out.go:309] Setting ErrFile to fd 2...
	I1002 03:53:14.248566    3631 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:53:14.248696    3631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:53:14.249751    3631 out.go:303] Setting JSON to false
	I1002 03:53:14.265665    3631 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1368,"bootTime":1696242626,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:53:14.265751    3631 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:53:14.271179    3631 out.go:177] * [auto-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:53:14.277227    3631 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:53:14.277282    3631 notify.go:220] Checking for updates...
	I1002 03:53:14.281229    3631 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:53:14.284290    3631 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:53:14.287229    3631 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:53:14.290211    3631 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:53:14.293229    3631 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:53:14.294988    3631 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:53:14.295066    3631 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:53:14.295116    3631 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:53:14.299163    3631 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:53:14.305998    3631 start.go:298] selected driver: qemu2
	I1002 03:53:14.306004    3631 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:53:14.306011    3631 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:53:14.308343    3631 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:53:14.311263    3631 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:53:14.314345    3631 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:53:14.314367    3631 cni.go:84] Creating CNI manager for ""
	I1002 03:53:14.314384    3631 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:53:14.314388    3631 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:53:14.314393    3631 start_flags.go:321] config:
	{Name:auto-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s}
	I1002 03:53:14.319083    3631 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:53:14.326171    3631 out.go:177] * Starting control plane node auto-547000 in cluster auto-547000
	I1002 03:53:14.330269    3631 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:53:14.330284    3631 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:53:14.330299    3631 cache.go:57] Caching tarball of preloaded images
	I1002 03:53:14.330360    3631 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:53:14.330365    3631 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:53:14.330425    3631 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/auto-547000/config.json ...
	I1002 03:53:14.330437    3631 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/auto-547000/config.json: {Name:mkd58e18075b13576de356c388a02d8f0ff9f9c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:53:14.330641    3631 start.go:365] acquiring machines lock for auto-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:53:14.330671    3631 start.go:369] acquired machines lock for "auto-547000" in 24.167µs
	I1002 03:53:14.330681    3631 start.go:93] Provisioning new machine with config: &{Name:auto-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.2 ClusterName:auto-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:53:14.330712    3631 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:53:14.338220    3631 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:53:14.354999    3631 start.go:159] libmachine.API.Create for "auto-547000" (driver="qemu2")
	I1002 03:53:14.355025    3631 client.go:168] LocalClient.Create starting
	I1002 03:53:14.355090    3631 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:53:14.355114    3631 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:14.355128    3631 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:14.355163    3631 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:53:14.355182    3631 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:14.355188    3631 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:14.355516    3631 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:53:14.466580    3631 main.go:141] libmachine: Creating SSH key...
	I1002 03:53:14.643080    3631 main.go:141] libmachine: Creating Disk image...
	I1002 03:53:14.643088    3631 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:53:14.643277    3631 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/disk.qcow2
	I1002 03:53:14.652631    3631 main.go:141] libmachine: STDOUT: 
	I1002 03:53:14.652646    3631 main.go:141] libmachine: STDERR: 
	I1002 03:53:14.652707    3631 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/disk.qcow2 +20000M
	I1002 03:53:14.660238    3631 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:53:14.660254    3631 main.go:141] libmachine: STDERR: 
	I1002 03:53:14.660274    3631 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/disk.qcow2
	I1002 03:53:14.660282    3631 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:53:14.660313    3631 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:5a:5d:bd:4c:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/disk.qcow2
	I1002 03:53:14.661955    3631 main.go:141] libmachine: STDOUT: 
	I1002 03:53:14.661970    3631 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:53:14.661994    3631 client.go:171] LocalClient.Create took 306.969625ms
	I1002 03:53:16.664148    3631 start.go:128] duration metric: createHost completed in 2.333457875s
	I1002 03:53:16.664225    3631 start.go:83] releasing machines lock for "auto-547000", held for 2.333591833s
	W1002 03:53:16.664273    3631 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:53:16.677403    3631 out.go:177] * Deleting "auto-547000" in qemu2 ...
	W1002 03:53:16.700749    3631 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:53:16.700778    3631 start.go:703] Will try again in 5 seconds ...
	I1002 03:53:21.703019    3631 start.go:365] acquiring machines lock for auto-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:53:21.703465    3631 start.go:369] acquired machines lock for "auto-547000" in 337.125µs
	I1002 03:53:21.703578    3631 start.go:93] Provisioning new machine with config: &{Name:auto-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.2 ClusterName:auto-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:53:21.703857    3631 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:53:21.713537    3631 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:53:21.761943    3631 start.go:159] libmachine.API.Create for "auto-547000" (driver="qemu2")
	I1002 03:53:21.761980    3631 client.go:168] LocalClient.Create starting
	I1002 03:53:21.762080    3631 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:53:21.762129    3631 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:21.762152    3631 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:21.762221    3631 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:53:21.762256    3631 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:21.762267    3631 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:21.762807    3631 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:53:21.887770    3631 main.go:141] libmachine: Creating SSH key...
	I1002 03:53:21.959959    3631 main.go:141] libmachine: Creating Disk image...
	I1002 03:53:21.959965    3631 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:53:21.960142    3631 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/disk.qcow2
	I1002 03:53:21.969374    3631 main.go:141] libmachine: STDOUT: 
	I1002 03:53:21.969388    3631 main.go:141] libmachine: STDERR: 
	I1002 03:53:21.969461    3631 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/disk.qcow2 +20000M
	I1002 03:53:21.976973    3631 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:53:21.977004    3631 main.go:141] libmachine: STDERR: 
	I1002 03:53:21.977021    3631 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/disk.qcow2
	I1002 03:53:21.977030    3631 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:53:21.977057    3631 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:68:23:44:cf:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/auto-547000/disk.qcow2
	I1002 03:53:21.978735    3631 main.go:141] libmachine: STDOUT: 
	I1002 03:53:21.978754    3631 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:53:21.978768    3631 client.go:171] LocalClient.Create took 216.787625ms
	I1002 03:53:23.980901    3631 start.go:128] duration metric: createHost completed in 2.277049292s
	I1002 03:53:23.980974    3631 start.go:83] releasing machines lock for "auto-547000", held for 2.2775345s
	W1002 03:53:23.981412    3631 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:53:23.991006    3631 out.go:177] 
	W1002 03:53:23.994997    3631 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:53:23.995025    3631 out.go:239] * 
	* 
	W1002 03:53:23.997861    3631 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:53:24.006983    3631 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.80s)
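Every failure in this group reduces to the same root cause visible in the stderr above: the socket_vmnet helper daemon is not listening on /var/run/socket_vmnet, so every `socket_vmnet_client` invocation exits with "Connection refused" before QEMU can start. A minimal check on the build host, sketched here under the assumption that socket_vmnet was installed via Homebrew (consistent with the /opt/socket_vmnet paths in the log):

```shell
#!/bin/sh
# Diagnose the recurring "Failed to connect to /var/run/socket_vmnet" error:
# verify the daemon's Unix socket exists before blaming the individual tests.
SOCKET=/var/run/socket_vmnet

if [ -S "$SOCKET" ]; then
    echo "socket_vmnet socket exists at $SOCKET"
else
    # The daemon is down; restarting it via Homebrew services is the usual
    # remedy (command assumes a Homebrew install of socket_vmnet).
    echo "socket_vmnet is not running; try: sudo brew services restart socket_vmnet"
fi
```

After restarting the daemon, re-running one of the failing tests should confirm whether the VMs can attach to the vmnet socket; until it is up, every qemu2-driver test on this agent will fail the same way.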

TestNetworkPlugins/group/kindnet/Start (9.92s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E1002 03:53:35.065407    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/ingress-addon-legacy-545000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.914449542s)

-- stdout --
	* [kindnet-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-547000 in cluster kindnet-547000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-547000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:53:26.088810    3749 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:53:26.088978    3749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:53:26.088981    3749 out.go:309] Setting ErrFile to fd 2...
	I1002 03:53:26.088984    3749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:53:26.089125    3749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:53:26.090171    3749 out.go:303] Setting JSON to false
	I1002 03:53:26.106119    3749 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1380,"bootTime":1696242626,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:53:26.106214    3749 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:53:26.111274    3749 out.go:177] * [kindnet-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:53:26.118188    3749 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:53:26.118254    3749 notify.go:220] Checking for updates...
	I1002 03:53:26.125086    3749 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:53:26.128170    3749 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:53:26.131150    3749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:53:26.134063    3749 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:53:26.137172    3749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:53:26.140550    3749 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:53:26.140612    3749 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:53:26.140651    3749 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:53:26.144091    3749 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:53:26.151178    3749 start.go:298] selected driver: qemu2
	I1002 03:53:26.151187    3749 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:53:26.151194    3749 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:53:26.153550    3749 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:53:26.157135    3749 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:53:26.160231    3749 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:53:26.160260    3749 cni.go:84] Creating CNI manager for "kindnet"
	I1002 03:53:26.160265    3749 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 03:53:26.160273    3749 start_flags.go:321] config:
	{Name:kindnet-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:53:26.164751    3749 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:53:26.172172    3749 out.go:177] * Starting control plane node kindnet-547000 in cluster kindnet-547000
	I1002 03:53:26.176190    3749 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:53:26.176205    3749 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:53:26.176216    3749 cache.go:57] Caching tarball of preloaded images
	I1002 03:53:26.176280    3749 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:53:26.176286    3749 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:53:26.176375    3749 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/kindnet-547000/config.json ...
	I1002 03:53:26.176388    3749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/kindnet-547000/config.json: {Name:mk8a38809008b90d37d6ccd38e3877d416685cf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:53:26.176562    3749 start.go:365] acquiring machines lock for kindnet-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:53:26.176593    3749 start.go:369] acquired machines lock for "kindnet-547000" in 25.166µs
	I1002 03:53:26.176602    3749 start.go:93] Provisioning new machine with config: &{Name:kindnet-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:53:26.176631    3749 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:53:26.184147    3749 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:53:26.199903    3749 start.go:159] libmachine.API.Create for "kindnet-547000" (driver="qemu2")
	I1002 03:53:26.199934    3749 client.go:168] LocalClient.Create starting
	I1002 03:53:26.199984    3749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:53:26.200011    3749 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:26.200020    3749 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:26.200054    3749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:53:26.200071    3749 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:26.200078    3749 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:26.200442    3749 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:53:26.311604    3749 main.go:141] libmachine: Creating SSH key...
	I1002 03:53:26.537603    3749 main.go:141] libmachine: Creating Disk image...
	I1002 03:53:26.537613    3749 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:53:26.537838    3749 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/disk.qcow2
	I1002 03:53:26.547307    3749 main.go:141] libmachine: STDOUT: 
	I1002 03:53:26.547330    3749 main.go:141] libmachine: STDERR: 
	I1002 03:53:26.547391    3749 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/disk.qcow2 +20000M
	I1002 03:53:26.555018    3749 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:53:26.555029    3749 main.go:141] libmachine: STDERR: 
	I1002 03:53:26.555049    3749 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/disk.qcow2
	I1002 03:53:26.555056    3749 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:53:26.555094    3749 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:33:b4:dd:e3:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/disk.qcow2
	I1002 03:53:26.556773    3749 main.go:141] libmachine: STDOUT: 
	I1002 03:53:26.556785    3749 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:53:26.556803    3749 client.go:171] LocalClient.Create took 356.873ms
	I1002 03:53:28.558964    3749 start.go:128] duration metric: createHost completed in 2.382348708s
	I1002 03:53:28.559064    3749 start.go:83] releasing machines lock for "kindnet-547000", held for 2.382513084s
	W1002 03:53:28.559181    3749 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:53:28.571517    3749 out.go:177] * Deleting "kindnet-547000" in qemu2 ...
	W1002 03:53:28.592281    3749 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:53:28.592316    3749 start.go:703] Will try again in 5 seconds ...
	I1002 03:53:33.594391    3749 start.go:365] acquiring machines lock for kindnet-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:53:33.594861    3749 start.go:369] acquired machines lock for "kindnet-547000" in 385.792µs
	I1002 03:53:33.595004    3749 start.go:93] Provisioning new machine with config: &{Name:kindnet-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:53:33.595270    3749 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:53:33.607914    3749 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:53:33.657603    3749 start.go:159] libmachine.API.Create for "kindnet-547000" (driver="qemu2")
	I1002 03:53:33.657660    3749 client.go:168] LocalClient.Create starting
	I1002 03:53:33.657771    3749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:53:33.657826    3749 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:33.657849    3749 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:33.657913    3749 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:53:33.657945    3749 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:33.657958    3749 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:33.658481    3749 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:53:33.781667    3749 main.go:141] libmachine: Creating SSH key...
	I1002 03:53:33.916975    3749 main.go:141] libmachine: Creating Disk image...
	I1002 03:53:33.916982    3749 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:53:33.917167    3749 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/disk.qcow2
	I1002 03:53:33.926433    3749 main.go:141] libmachine: STDOUT: 
	I1002 03:53:33.926453    3749 main.go:141] libmachine: STDERR: 
	I1002 03:53:33.926528    3749 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/disk.qcow2 +20000M
	I1002 03:53:33.934070    3749 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:53:33.934083    3749 main.go:141] libmachine: STDERR: 
	I1002 03:53:33.934100    3749 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/disk.qcow2
	I1002 03:53:33.934109    3749 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:53:33.934154    3749 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:ac:cb:29:8e:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kindnet-547000/disk.qcow2
	I1002 03:53:33.935783    3749 main.go:141] libmachine: STDOUT: 
	I1002 03:53:33.935795    3749 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:53:33.935812    3749 client.go:171] LocalClient.Create took 278.150333ms
	I1002 03:53:35.938032    3749 start.go:128] duration metric: createHost completed in 2.342783917s
	I1002 03:53:35.938124    3749 start.go:83] releasing machines lock for "kindnet-547000", held for 2.34328525s
	W1002 03:53:35.938523    3749 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:53:35.948223    3749 out.go:177] 
	W1002 03:53:35.952241    3749 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:53:35.952264    3749 out.go:239] * 
	* 
	W1002 03:53:35.955372    3749 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:53:35.963218    3749 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.92s)
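Editor's note: every failure in this group reduces to the same STDERR line, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, emitted when `socket_vmnet_client` dials the daemon's Unix socket. On an AF_UNIX socket, `ECONNREFUSED` specifically means the socket file exists but nothing is accepting on it, which points at the `socket_vmnet` daemon being down on the CI host rather than at QEMU or minikube itself. A minimal probe sketching this distinction (the function name is illustrative, not part of minikube):

```python
import errno
import socket

def probe_unix_socket(path: str):
    """Attempt to connect to a Unix-domain socket.

    Returns None on success, otherwise the errno of the failed connect:
    ECONNREFUSED if the socket file exists but no daemon is listening,
    ENOENT if the socket file is missing entirely.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return None
    except OSError as e:
        return e.errno
    finally:
        s.close()
```

On the failing host, probing `/var/run/socket_vmnet` would presumably return `errno.ECONNREFUSED`, matching the STDERR above; restarting the `socket_vmnet` service (or re-running its launchd job) is the usual remedy suggested by the socket_vmnet docs.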

TestNetworkPlugins/group/calico/Start (9.86s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.861898542s)

-- stdout --
	* [calico-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-547000 in cluster calico-547000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-547000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:53:38.155730    3865 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:53:38.155887    3865 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:53:38.155891    3865 out.go:309] Setting ErrFile to fd 2...
	I1002 03:53:38.155893    3865 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:53:38.156039    3865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:53:38.157069    3865 out.go:303] Setting JSON to false
	I1002 03:53:38.173109    3865 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1392,"bootTime":1696242626,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:53:38.173207    3865 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:53:38.177649    3865 out.go:177] * [calico-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:53:38.188568    3865 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:53:38.184674    3865 notify.go:220] Checking for updates...
	I1002 03:53:38.194625    3865 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:53:38.197644    3865 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:53:38.200555    3865 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:53:38.203654    3865 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:53:38.206496    3865 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:53:38.209982    3865 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:53:38.210048    3865 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:53:38.210089    3865 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:53:38.214565    3865 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:53:38.221623    3865 start.go:298] selected driver: qemu2
	I1002 03:53:38.221631    3865 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:53:38.221639    3865 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:53:38.224071    3865 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:53:38.226653    3865 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:53:38.229668    3865 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:53:38.229688    3865 cni.go:84] Creating CNI manager for "calico"
	I1002 03:53:38.229693    3865 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I1002 03:53:38.229701    3865 start_flags.go:321] config:
	{Name:calico-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:calico-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I1002 03:53:38.234387    3865 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:53:38.241654    3865 out.go:177] * Starting control plane node calico-547000 in cluster calico-547000
	I1002 03:53:38.245616    3865 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:53:38.245631    3865 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:53:38.245648    3865 cache.go:57] Caching tarball of preloaded images
	I1002 03:53:38.245715    3865 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:53:38.245721    3865 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:53:38.245802    3865 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/calico-547000/config.json ...
	I1002 03:53:38.245813    3865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/calico-547000/config.json: {Name:mk2b0e60b51e171fd3ad8f6f645779a7d741513e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:53:38.246024    3865 start.go:365] acquiring machines lock for calico-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:53:38.246053    3865 start.go:369] acquired machines lock for "calico-547000" in 23.625µs
	I1002 03:53:38.246065    3865 start.go:93] Provisioning new machine with config: &{Name:calico-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:calico-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:53:38.246096    3865 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:53:38.254511    3865 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:53:38.271326    3865 start.go:159] libmachine.API.Create for "calico-547000" (driver="qemu2")
	I1002 03:53:38.271356    3865 client.go:168] LocalClient.Create starting
	I1002 03:53:38.271416    3865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:53:38.271449    3865 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:38.271458    3865 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:38.271501    3865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:53:38.271519    3865 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:38.271527    3865 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:38.271843    3865 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:53:38.382933    3865 main.go:141] libmachine: Creating SSH key...
	I1002 03:53:38.573206    3865 main.go:141] libmachine: Creating Disk image...
	I1002 03:53:38.573219    3865 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:53:38.573453    3865 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/disk.qcow2
	I1002 03:53:38.582937    3865 main.go:141] libmachine: STDOUT: 
	I1002 03:53:38.582961    3865 main.go:141] libmachine: STDERR: 
	I1002 03:53:38.583034    3865 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/disk.qcow2 +20000M
	I1002 03:53:38.590730    3865 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:53:38.590746    3865 main.go:141] libmachine: STDERR: 
	I1002 03:53:38.590761    3865 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/disk.qcow2
	I1002 03:53:38.590767    3865 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:53:38.590810    3865 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:ae:e3:b1:74:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/disk.qcow2
	I1002 03:53:38.592560    3865 main.go:141] libmachine: STDOUT: 
	I1002 03:53:38.592574    3865 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:53:38.592592    3865 client.go:171] LocalClient.Create took 321.238583ms
	I1002 03:53:40.594736    3865 start.go:128] duration metric: createHost completed in 2.348663125s
	I1002 03:53:40.594827    3865 start.go:83] releasing machines lock for "calico-547000", held for 2.348813708s
	W1002 03:53:40.594886    3865 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:53:40.606151    3865 out.go:177] * Deleting "calico-547000" in qemu2 ...
	W1002 03:53:40.627306    3865 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:53:40.627336    3865 start.go:703] Will try again in 5 seconds ...
	I1002 03:53:45.629475    3865 start.go:365] acquiring machines lock for calico-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:53:45.629773    3865 start.go:369] acquired machines lock for "calico-547000" in 229.667µs
	I1002 03:53:45.629835    3865 start.go:93] Provisioning new machine with config: &{Name:calico-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:calico-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:53:45.630039    3865 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:53:45.635106    3865 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:53:45.678841    3865 start.go:159] libmachine.API.Create for "calico-547000" (driver="qemu2")
	I1002 03:53:45.678900    3865 client.go:168] LocalClient.Create starting
	I1002 03:53:45.679001    3865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:53:45.679058    3865 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:45.679078    3865 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:45.679152    3865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:53:45.679213    3865 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:45.679228    3865 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:45.679742    3865 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:53:45.804388    3865 main.go:141] libmachine: Creating SSH key...
	I1002 03:53:45.925123    3865 main.go:141] libmachine: Creating Disk image...
	I1002 03:53:45.925129    3865 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:53:45.925311    3865 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/disk.qcow2
	I1002 03:53:45.934368    3865 main.go:141] libmachine: STDOUT: 
	I1002 03:53:45.934385    3865 main.go:141] libmachine: STDERR: 
	I1002 03:53:45.934446    3865 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/disk.qcow2 +20000M
	I1002 03:53:45.943129    3865 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:53:45.943148    3865 main.go:141] libmachine: STDERR: 
	I1002 03:53:45.943159    3865 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/disk.qcow2
	I1002 03:53:45.943166    3865 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:53:45.943198    3865 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:77:b1:fc:b9:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/calico-547000/disk.qcow2
	I1002 03:53:45.944962    3865 main.go:141] libmachine: STDOUT: 
	I1002 03:53:45.944975    3865 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:53:45.944989    3865 client.go:171] LocalClient.Create took 266.0895ms
	I1002 03:53:47.947132    3865 start.go:128] duration metric: createHost completed in 2.317118834s
	I1002 03:53:47.947199    3865 start.go:83] releasing machines lock for "calico-547000", held for 2.317458917s
	W1002 03:53:47.947564    3865 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:53:47.960219    3865 out.go:177] 
	W1002 03:53:47.964391    3865 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:53:47.964417    3865 out.go:239] * 
	* 
	W1002 03:53:47.966938    3865 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:53:47.977317    3865 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.86s)
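Editor's note: the failures in this group all share one proximate cause visible in the logs above: `socket_vmnet_client` gets `Connection refused` on the unix socket `/var/run/socket_vmnet`, meaning the socket_vmnet daemon was not accepting connections on the CI host. The sketch below is a minimal, hypothetical diagnostic (not part of minikube or this test suite) that distinguishes a socket file that was never created from a stale one left behind by a dead daemon:

```python
import errno
import os
import socket

def probe_unix_socket(path: str) -> str:
    """Classify a unix-domain socket path as 'ok', 'missing', or 'refused'.

    'refused' mirrors the ECONNREFUSED seen in the logs: the socket file
    exists but no daemon is listening on it (e.g. socket_vmnet died or was
    never started with sufficient privileges).
    """
    if not os.path.exists(path):
        return "missing"  # daemon never created the socket file
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "ok"
    except OSError as e:
        if e.errno == errno.ECONNREFUSED:
            return "refused"  # stale socket file, no listener behind it
        raise
    finally:
        s.close()
```

Run against `/var/run/socket_vmnet` on the affected host, a `"missing"` or `"refused"` result would confirm the daemon-side failure before re-running the test group.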

TestNetworkPlugins/group/custom-flannel/Start (9.73s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.733046666s)

-- stdout --
	* [custom-flannel-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-547000 in cluster custom-flannel-547000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-547000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:53:50.318992    3985 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:53:50.319131    3985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:53:50.319135    3985 out.go:309] Setting ErrFile to fd 2...
	I1002 03:53:50.319137    3985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:53:50.319268    3985 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:53:50.320317    3985 out.go:303] Setting JSON to false
	I1002 03:53:50.336212    3985 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1404,"bootTime":1696242626,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:53:50.336304    3985 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:53:50.341508    3985 out.go:177] * [custom-flannel-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:53:50.347461    3985 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:53:50.351423    3985 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:53:50.347535    3985 notify.go:220] Checking for updates...
	I1002 03:53:50.357350    3985 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:53:50.360431    3985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:53:50.363264    3985 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:53:50.366360    3985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:53:50.369790    3985 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:53:50.369856    3985 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:53:50.369898    3985 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:53:50.373287    3985 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:53:50.380345    3985 start.go:298] selected driver: qemu2
	I1002 03:53:50.380351    3985 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:53:50.380356    3985 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:53:50.382614    3985 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:53:50.384188    3985 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:53:50.387451    3985 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:53:50.387471    3985 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1002 03:53:50.387479    3985 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1002 03:53:50.387485    3985 start_flags.go:321] config:
	{Name:custom-flannel-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:53:50.391654    3985 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:53:50.398414    3985 out.go:177] * Starting control plane node custom-flannel-547000 in cluster custom-flannel-547000
	I1002 03:53:50.402393    3985 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:53:50.402408    3985 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:53:50.402422    3985 cache.go:57] Caching tarball of preloaded images
	I1002 03:53:50.402479    3985 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:53:50.402485    3985 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:53:50.402545    3985 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/custom-flannel-547000/config.json ...
	I1002 03:53:50.402556    3985 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/custom-flannel-547000/config.json: {Name:mk415ba6a69b5aefb34937754221460fe2f3dce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:53:50.402757    3985 start.go:365] acquiring machines lock for custom-flannel-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:53:50.402786    3985 start.go:369] acquired machines lock for "custom-flannel-547000" in 22.625µs
	I1002 03:53:50.402796    3985 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:53:50.402821    3985 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:53:50.411368    3985 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:53:50.427157    3985 start.go:159] libmachine.API.Create for "custom-flannel-547000" (driver="qemu2")
	I1002 03:53:50.427183    3985 client.go:168] LocalClient.Create starting
	I1002 03:53:50.427265    3985 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:53:50.427286    3985 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:50.427296    3985 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:50.427327    3985 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:53:50.427345    3985 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:50.427353    3985 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:50.427675    3985 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:53:50.539425    3985 main.go:141] libmachine: Creating SSH key...
	I1002 03:53:50.616718    3985 main.go:141] libmachine: Creating Disk image...
	I1002 03:53:50.616728    3985 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:53:50.616952    3985 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/disk.qcow2
	I1002 03:53:50.625837    3985 main.go:141] libmachine: STDOUT: 
	I1002 03:53:50.625852    3985 main.go:141] libmachine: STDERR: 
	I1002 03:53:50.625906    3985 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/disk.qcow2 +20000M
	I1002 03:53:50.633403    3985 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:53:50.633429    3985 main.go:141] libmachine: STDERR: 
	I1002 03:53:50.633444    3985 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/disk.qcow2
	I1002 03:53:50.633454    3985 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:53:50.633500    3985 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:5e:9f:fc:3b:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/disk.qcow2
	I1002 03:53:50.635102    3985 main.go:141] libmachine: STDOUT: 
	I1002 03:53:50.635113    3985 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:53:50.635131    3985 client.go:171] LocalClient.Create took 207.948208ms
	I1002 03:53:52.637285    3985 start.go:128] duration metric: createHost completed in 2.234488792s
	I1002 03:53:52.637349    3985 start.go:83] releasing machines lock for "custom-flannel-547000", held for 2.234600958s
	W1002 03:53:52.637399    3985 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:53:52.645751    3985 out.go:177] * Deleting "custom-flannel-547000" in qemu2 ...
	W1002 03:53:52.666453    3985 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:53:52.666493    3985 start.go:703] Will try again in 5 seconds ...
	I1002 03:53:57.668613    3985 start.go:365] acquiring machines lock for custom-flannel-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:53:57.669088    3985 start.go:369] acquired machines lock for "custom-flannel-547000" in 383.666µs
	I1002 03:53:57.669225    3985 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:53:57.669518    3985 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:53:57.675287    3985 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:53:57.726016    3985 start.go:159] libmachine.API.Create for "custom-flannel-547000" (driver="qemu2")
	I1002 03:53:57.726075    3985 client.go:168] LocalClient.Create starting
	I1002 03:53:57.726181    3985 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:53:57.726246    3985 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:57.726270    3985 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:57.726335    3985 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:53:57.726382    3985 main.go:141] libmachine: Decoding PEM data...
	I1002 03:53:57.726403    3985 main.go:141] libmachine: Parsing certificate...
	I1002 03:53:57.727041    3985 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:53:57.853468    3985 main.go:141] libmachine: Creating SSH key...
	I1002 03:53:57.961147    3985 main.go:141] libmachine: Creating Disk image...
	I1002 03:53:57.961152    3985 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:53:57.961323    3985 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/disk.qcow2
	I1002 03:53:57.970145    3985 main.go:141] libmachine: STDOUT: 
	I1002 03:53:57.970160    3985 main.go:141] libmachine: STDERR: 
	I1002 03:53:57.970220    3985 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/disk.qcow2 +20000M
	I1002 03:53:57.977626    3985 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:53:57.977639    3985 main.go:141] libmachine: STDERR: 
	I1002 03:53:57.977652    3985 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/disk.qcow2
	I1002 03:53:57.977659    3985 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:53:57.977708    3985 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:dc:54:c2:8c:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/custom-flannel-547000/disk.qcow2
	I1002 03:53:57.979373    3985 main.go:141] libmachine: STDOUT: 
	I1002 03:53:57.979384    3985 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:53:57.979394    3985 client.go:171] LocalClient.Create took 253.319958ms
	I1002 03:53:59.981549    3985 start.go:128] duration metric: createHost completed in 2.312042667s
	I1002 03:53:59.981647    3985 start.go:83] releasing machines lock for "custom-flannel-547000", held for 2.312576125s
	W1002 03:53:59.982172    3985 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:53:59.993948    3985 out.go:177] 
	W1002 03:53:59.997995    3985 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:53:59.998065    3985 out.go:239] * 
	* 
	W1002 03:54:00.001075    3985 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:54:00.012926    3985 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.73s)

TestNetworkPlugins/group/false/Start (9.71s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.701508833s)

-- stdout --
	* [false-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-547000 in cluster false-547000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-547000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:54:02.337974    4108 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:54:02.338142    4108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:54:02.338145    4108 out.go:309] Setting ErrFile to fd 2...
	I1002 03:54:02.338147    4108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:54:02.338277    4108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:54:02.339307    4108 out.go:303] Setting JSON to false
	I1002 03:54:02.355379    4108 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1416,"bootTime":1696242626,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:54:02.355460    4108 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:54:02.360737    4108 out.go:177] * [false-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:54:02.368681    4108 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:54:02.368731    4108 notify.go:220] Checking for updates...
	I1002 03:54:02.372597    4108 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:54:02.375637    4108 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:54:02.378651    4108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:54:02.381627    4108 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:54:02.384555    4108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:54:02.387933    4108 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:54:02.388000    4108 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:54:02.388049    4108 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:54:02.392652    4108 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:54:02.399530    4108 start.go:298] selected driver: qemu2
	I1002 03:54:02.399537    4108 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:54:02.399543    4108 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:54:02.401912    4108 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:54:02.404646    4108 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:54:02.407626    4108 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:54:02.407653    4108 cni.go:84] Creating CNI manager for "false"
	I1002 03:54:02.407658    4108 start_flags.go:321] config:
	{Name:false-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:false-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPI
D:0 AutoPauseInterval:1m0s}
	I1002 03:54:02.412191    4108 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:54:02.419638    4108 out.go:177] * Starting control plane node false-547000 in cluster false-547000
	I1002 03:54:02.423611    4108 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:54:02.423629    4108 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:54:02.423641    4108 cache.go:57] Caching tarball of preloaded images
	I1002 03:54:02.423702    4108 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:54:02.423710    4108 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:54:02.423785    4108 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/false-547000/config.json ...
	I1002 03:54:02.423798    4108 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/false-547000/config.json: {Name:mk40f7a531d0ca7e3cc1f8253395ab2ecbe60eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:54:02.424003    4108 start.go:365] acquiring machines lock for false-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:54:02.424034    4108 start.go:369] acquired machines lock for "false-547000" in 24.708µs
	I1002 03:54:02.424045    4108 start.go:93] Provisioning new machine with config: &{Name:false-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:false-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:54:02.424080    4108 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:54:02.432563    4108 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:54:02.449624    4108 start.go:159] libmachine.API.Create for "false-547000" (driver="qemu2")
	I1002 03:54:02.449655    4108 client.go:168] LocalClient.Create starting
	I1002 03:54:02.449720    4108 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:54:02.449751    4108 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:02.449760    4108 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:02.449803    4108 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:54:02.449823    4108 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:02.449831    4108 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:02.450208    4108 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:54:02.561911    4108 main.go:141] libmachine: Creating SSH key...
	I1002 03:54:02.612424    4108 main.go:141] libmachine: Creating Disk image...
	I1002 03:54:02.612433    4108 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:54:02.612601    4108 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/disk.qcow2
	I1002 03:54:02.621337    4108 main.go:141] libmachine: STDOUT: 
	I1002 03:54:02.621353    4108 main.go:141] libmachine: STDERR: 
	I1002 03:54:02.621399    4108 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/disk.qcow2 +20000M
	I1002 03:54:02.628829    4108 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:54:02.628842    4108 main.go:141] libmachine: STDERR: 
	I1002 03:54:02.628856    4108 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/disk.qcow2
	I1002 03:54:02.628861    4108 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:54:02.628892    4108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:1a:b0:4b:3f:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/disk.qcow2
	I1002 03:54:02.630467    4108 main.go:141] libmachine: STDOUT: 
	I1002 03:54:02.630482    4108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:54:02.630501    4108 client.go:171] LocalClient.Create took 180.843459ms
	I1002 03:54:04.632641    4108 start.go:128] duration metric: createHost completed in 2.208584542s
	I1002 03:54:04.632717    4108 start.go:83] releasing machines lock for "false-547000", held for 2.208720709s
	W1002 03:54:04.632808    4108 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:04.646797    4108 out.go:177] * Deleting "false-547000" in qemu2 ...
	W1002 03:54:04.669558    4108 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:04.669599    4108 start.go:703] Will try again in 5 seconds ...
	I1002 03:54:09.671789    4108 start.go:365] acquiring machines lock for false-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:54:09.672267    4108 start.go:369] acquired machines lock for "false-547000" in 351.625µs
	I1002 03:54:09.672405    4108 start.go:93] Provisioning new machine with config: &{Name:false-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:false-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:54:09.672603    4108 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:54:09.683332    4108 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:54:09.731273    4108 start.go:159] libmachine.API.Create for "false-547000" (driver="qemu2")
	I1002 03:54:09.731324    4108 client.go:168] LocalClient.Create starting
	I1002 03:54:09.731429    4108 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:54:09.731481    4108 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:09.731509    4108 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:09.731566    4108 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:54:09.731600    4108 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:09.731616    4108 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:09.732091    4108 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:54:09.856472    4108 main.go:141] libmachine: Creating SSH key...
	I1002 03:54:09.952861    4108 main.go:141] libmachine: Creating Disk image...
	I1002 03:54:09.952870    4108 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:54:09.953048    4108 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/disk.qcow2
	I1002 03:54:09.961917    4108 main.go:141] libmachine: STDOUT: 
	I1002 03:54:09.961935    4108 main.go:141] libmachine: STDERR: 
	I1002 03:54:09.962005    4108 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/disk.qcow2 +20000M
	I1002 03:54:09.969674    4108 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:54:09.969688    4108 main.go:141] libmachine: STDERR: 
	I1002 03:54:09.969701    4108 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/disk.qcow2
	I1002 03:54:09.969711    4108 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:54:09.969751    4108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:aa:a3:41:dd:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/false-547000/disk.qcow2
	I1002 03:54:09.971439    4108 main.go:141] libmachine: STDOUT: 
	I1002 03:54:09.971458    4108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:54:09.971473    4108 client.go:171] LocalClient.Create took 240.145458ms
	I1002 03:54:11.973643    4108 start.go:128] duration metric: createHost completed in 2.301043791s
	I1002 03:54:11.973705    4108 start.go:83] releasing machines lock for "false-547000", held for 2.301460666s
	W1002 03:54:11.974112    4108 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:11.982774    4108 out.go:177] 
	W1002 03:54:11.987849    4108 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:54:11.987874    4108 out.go:239] * 
	* 
	W1002 03:54:11.990496    4108 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:54:11.999717    4108 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.71s)

TestNetworkPlugins/group/enable-default-cni/Start (9.7s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.699488417s)

-- stdout --
	* [enable-default-cni-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-547000 in cluster enable-default-cni-547000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-547000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:54:14.152653    4223 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:54:14.152785    4223 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:54:14.152788    4223 out.go:309] Setting ErrFile to fd 2...
	I1002 03:54:14.152792    4223 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:54:14.152936    4223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:54:14.154012    4223 out.go:303] Setting JSON to false
	I1002 03:54:14.169964    4223 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1428,"bootTime":1696242626,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:54:14.170067    4223 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:54:14.178461    4223 out.go:177] * [enable-default-cni-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:54:14.182286    4223 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:54:14.186483    4223 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:54:14.182373    4223 notify.go:220] Checking for updates...
	I1002 03:54:14.190807    4223 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:54:14.193446    4223 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:54:14.196481    4223 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:54:14.199443    4223 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:54:14.202782    4223 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:54:14.202854    4223 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:54:14.202900    4223 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:54:14.207418    4223 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:54:14.214421    4223 start.go:298] selected driver: qemu2
	I1002 03:54:14.214427    4223 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:54:14.214433    4223 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:54:14.216800    4223 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:54:14.219490    4223 out.go:177] * Automatically selected the socket_vmnet network
	E1002 03:54:14.222493    4223 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1002 03:54:14.222505    4223 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:54:14.222525    4223 cni.go:84] Creating CNI manager for "bridge"
	I1002 03:54:14.222530    4223 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:54:14.222535    4223 start_flags.go:321] config:
	{Name:enable-default-cni-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:54:14.227156    4223 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:54:14.234390    4223 out.go:177] * Starting control plane node enable-default-cni-547000 in cluster enable-default-cni-547000
	I1002 03:54:14.237423    4223 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:54:14.237437    4223 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:54:14.237455    4223 cache.go:57] Caching tarball of preloaded images
	I1002 03:54:14.237511    4223 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:54:14.237516    4223 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:54:14.237600    4223 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/enable-default-cni-547000/config.json ...
	I1002 03:54:14.237612    4223 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/enable-default-cni-547000/config.json: {Name:mkb0657c39a7c48f2c345039b65b4d12c9a1abb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:54:14.237850    4223 start.go:365] acquiring machines lock for enable-default-cni-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:54:14.237882    4223 start.go:369] acquired machines lock for "enable-default-cni-547000" in 23.334µs
	I1002 03:54:14.237891    4223 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:54:14.237936    4223 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:54:14.242484    4223 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:54:14.258429    4223 start.go:159] libmachine.API.Create for "enable-default-cni-547000" (driver="qemu2")
	I1002 03:54:14.258449    4223 client.go:168] LocalClient.Create starting
	I1002 03:54:14.258498    4223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:54:14.258523    4223 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:14.258535    4223 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:14.258572    4223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:54:14.258590    4223 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:14.258597    4223 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:14.258920    4223 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:54:14.370738    4223 main.go:141] libmachine: Creating SSH key...
	I1002 03:54:14.450478    4223 main.go:141] libmachine: Creating Disk image...
	I1002 03:54:14.450483    4223 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:54:14.450913    4223 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/disk.qcow2
	I1002 03:54:14.460139    4223 main.go:141] libmachine: STDOUT: 
	I1002 03:54:14.460165    4223 main.go:141] libmachine: STDERR: 
	I1002 03:54:14.460224    4223 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/disk.qcow2 +20000M
	I1002 03:54:14.467835    4223 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:54:14.467860    4223 main.go:141] libmachine: STDERR: 
	I1002 03:54:14.467890    4223 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/disk.qcow2
	I1002 03:54:14.467897    4223 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:54:14.467934    4223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:71:45:bb:f4:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/disk.qcow2
	I1002 03:54:14.469592    4223 main.go:141] libmachine: STDOUT: 
	I1002 03:54:14.469611    4223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:54:14.469635    4223 client.go:171] LocalClient.Create took 211.187625ms
	I1002 03:54:16.471767    4223 start.go:128] duration metric: createHost completed in 2.233855s
	I1002 03:54:16.471841    4223 start.go:83] releasing machines lock for "enable-default-cni-547000", held for 2.233996792s
	W1002 03:54:16.471888    4223 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:16.480072    4223 out.go:177] * Deleting "enable-default-cni-547000" in qemu2 ...
	W1002 03:54:16.503942    4223 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:16.503979    4223 start.go:703] Will try again in 5 seconds ...
	I1002 03:54:21.506194    4223 start.go:365] acquiring machines lock for enable-default-cni-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:54:21.506636    4223 start.go:369] acquired machines lock for "enable-default-cni-547000" in 278.792µs
	I1002 03:54:21.506754    4223 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:54:21.507003    4223 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:54:21.512774    4223 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:54:21.560451    4223 start.go:159] libmachine.API.Create for "enable-default-cni-547000" (driver="qemu2")
	I1002 03:54:21.560504    4223 client.go:168] LocalClient.Create starting
	I1002 03:54:21.560603    4223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:54:21.560648    4223 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:21.560668    4223 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:21.560727    4223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:54:21.560762    4223 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:21.560773    4223 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:21.561282    4223 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:54:21.685794    4223 main.go:141] libmachine: Creating SSH key...
	I1002 03:54:21.759790    4223 main.go:141] libmachine: Creating Disk image...
	I1002 03:54:21.759796    4223 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:54:21.759969    4223 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/disk.qcow2
	I1002 03:54:21.768854    4223 main.go:141] libmachine: STDOUT: 
	I1002 03:54:21.768867    4223 main.go:141] libmachine: STDERR: 
	I1002 03:54:21.768923    4223 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/disk.qcow2 +20000M
	I1002 03:54:21.776330    4223 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:54:21.776345    4223 main.go:141] libmachine: STDERR: 
	I1002 03:54:21.776359    4223 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/disk.qcow2
	I1002 03:54:21.776374    4223 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:54:21.776416    4223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:cc:0a:26:af:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/enable-default-cni-547000/disk.qcow2
	I1002 03:54:21.778172    4223 main.go:141] libmachine: STDOUT: 
	I1002 03:54:21.778186    4223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:54:21.778201    4223 client.go:171] LocalClient.Create took 217.696167ms
	I1002 03:54:23.780374    4223 start.go:128] duration metric: createHost completed in 2.273381834s
	I1002 03:54:23.780468    4223 start.go:83] releasing machines lock for "enable-default-cni-547000", held for 2.273854584s
	W1002 03:54:23.780920    4223 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:23.793713    4223 out.go:177] 
	W1002 03:54:23.798822    4223 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:54:23.798846    4223 out.go:239] * 
	* 
	W1002 03:54:23.801345    4223 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:54:23.812702    4223 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.70s)
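Every failure in this section traces back to the same root cause visible in the stderr above: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not running on the build agent. A minimal sketch of a pre-flight check, assuming the default socket path from the log (the Homebrew service name is an assumption based on a typical socket_vmnet install, not something this report confirms):

```shell
# Pre-flight check for the socket_vmnet daemon used by the qemu2 driver.
# Path matches SocketVMnetPath from the cluster config in the log above.
SOCK=/var/run/socket_vmnet

if [ -S "$SOCK" ]; then
  echo "socket_vmnet socket present at $SOCK"
else
  # Hypothetical remediation for a Homebrew-based install:
  #   sudo brew services start socket_vmnet
  echo "socket_vmnet socket missing at $SOCK: start the daemon before running the qemu2 tests"
fi
```

Running such a check before the test sweep would fail fast instead of letting every `Start` test burn ~10s retrying the same refused connection.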

TestNetworkPlugins/group/flannel/Start (9.69s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.6902545s)

-- stdout --
	* [flannel-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-547000 in cluster flannel-547000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-547000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:54:25.983799    4337 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:54:25.983936    4337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:54:25.983938    4337 out.go:309] Setting ErrFile to fd 2...
	I1002 03:54:25.983941    4337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:54:25.984070    4337 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:54:25.985096    4337 out.go:303] Setting JSON to false
	I1002 03:54:26.000909    4337 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1439,"bootTime":1696242626,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:54:26.000985    4337 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:54:26.006312    4337 out.go:177] * [flannel-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:54:26.016301    4337 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:54:26.012281    4337 notify.go:220] Checking for updates...
	I1002 03:54:26.023279    4337 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:54:26.026303    4337 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:54:26.029196    4337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:54:26.032316    4337 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:54:26.035308    4337 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:54:26.036970    4337 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:54:26.037039    4337 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:54:26.037081    4337 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:54:26.041276    4337 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:54:26.048150    4337 start.go:298] selected driver: qemu2
	I1002 03:54:26.048159    4337 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:54:26.048167    4337 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:54:26.050587    4337 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:54:26.053310    4337 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:54:26.056325    4337 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:54:26.056344    4337 cni.go:84] Creating CNI manager for "flannel"
	I1002 03:54:26.056347    4337 start_flags.go:316] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1002 03:54:26.056354    4337 start_flags.go:321] config:
	{Name:flannel-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:flannel-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:54:26.061081    4337 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:54:26.075959    4337 out.go:177] * Starting control plane node flannel-547000 in cluster flannel-547000
	I1002 03:54:26.080292    4337 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:54:26.080308    4337 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:54:26.080326    4337 cache.go:57] Caching tarball of preloaded images
	I1002 03:54:26.080388    4337 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:54:26.080394    4337 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:54:26.080462    4337 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/flannel-547000/config.json ...
	I1002 03:54:26.080480    4337 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/flannel-547000/config.json: {Name:mkf2bc49ebb30057b496f958237fd978c44d9093 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:54:26.080681    4337 start.go:365] acquiring machines lock for flannel-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:54:26.080712    4337 start.go:369] acquired machines lock for "flannel-547000" in 25.375µs
	I1002 03:54:26.080722    4337 start.go:93] Provisioning new machine with config: &{Name:flannel-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:flannel-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:54:26.080755    4337 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:54:26.089279    4337 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:54:26.107156    4337 start.go:159] libmachine.API.Create for "flannel-547000" (driver="qemu2")
	I1002 03:54:26.107187    4337 client.go:168] LocalClient.Create starting
	I1002 03:54:26.107239    4337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:54:26.107270    4337 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:26.107282    4337 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:26.107320    4337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:54:26.107339    4337 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:26.107349    4337 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:26.107714    4337 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:54:26.219775    4337 main.go:141] libmachine: Creating SSH key...
	I1002 03:54:26.264559    4337 main.go:141] libmachine: Creating Disk image...
	I1002 03:54:26.264565    4337 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:54:26.264725    4337 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/disk.qcow2
	I1002 03:54:26.273626    4337 main.go:141] libmachine: STDOUT: 
	I1002 03:54:26.273642    4337 main.go:141] libmachine: STDERR: 
	I1002 03:54:26.273703    4337 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/disk.qcow2 +20000M
	I1002 03:54:26.281079    4337 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:54:26.281090    4337 main.go:141] libmachine: STDERR: 
	I1002 03:54:26.281109    4337 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/disk.qcow2
	I1002 03:54:26.281116    4337 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:54:26.281153    4337 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:eb:05:49:d7:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/disk.qcow2
	I1002 03:54:26.282690    4337 main.go:141] libmachine: STDOUT: 
	I1002 03:54:26.282703    4337 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:54:26.282721    4337 client.go:171] LocalClient.Create took 175.533666ms
	I1002 03:54:28.284850    4337 start.go:128] duration metric: createHost completed in 2.204121667s
	I1002 03:54:28.284924    4337 start.go:83] releasing machines lock for "flannel-547000", held for 2.2042475s
	W1002 03:54:28.285006    4337 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:28.293150    4337 out.go:177] * Deleting "flannel-547000" in qemu2 ...
	W1002 03:54:28.314934    4337 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:28.314958    4337 start.go:703] Will try again in 5 seconds ...
	I1002 03:54:33.317122    4337 start.go:365] acquiring machines lock for flannel-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:54:33.317565    4337 start.go:369] acquired machines lock for "flannel-547000" in 343µs
	I1002 03:54:33.317703    4337 start.go:93] Provisioning new machine with config: &{Name:flannel-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:flannel-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:54:33.317966    4337 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:54:33.324425    4337 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:54:33.371633    4337 start.go:159] libmachine.API.Create for "flannel-547000" (driver="qemu2")
	I1002 03:54:33.371686    4337 client.go:168] LocalClient.Create starting
	I1002 03:54:33.371799    4337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:54:33.371863    4337 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:33.371882    4337 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:33.371940    4337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:54:33.371977    4337 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:33.371990    4337 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:33.372505    4337 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:54:33.497673    4337 main.go:141] libmachine: Creating SSH key...
	I1002 03:54:33.588931    4337 main.go:141] libmachine: Creating Disk image...
	I1002 03:54:33.588939    4337 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:54:33.589107    4337 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/disk.qcow2
	I1002 03:54:33.597869    4337 main.go:141] libmachine: STDOUT: 
	I1002 03:54:33.597888    4337 main.go:141] libmachine: STDERR: 
	I1002 03:54:33.597944    4337 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/disk.qcow2 +20000M
	I1002 03:54:33.605448    4337 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:54:33.605461    4337 main.go:141] libmachine: STDERR: 
	I1002 03:54:33.605480    4337 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/disk.qcow2
	I1002 03:54:33.605486    4337 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:54:33.605528    4337 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:eb:af:51:14:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/flannel-547000/disk.qcow2
	I1002 03:54:33.607129    4337 main.go:141] libmachine: STDOUT: 
	I1002 03:54:33.607144    4337 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:54:33.607161    4337 client.go:171] LocalClient.Create took 235.475042ms
	I1002 03:54:35.609289    4337 start.go:128] duration metric: createHost completed in 2.29134475s
	I1002 03:54:35.609406    4337 start.go:83] releasing machines lock for "flannel-547000", held for 2.291823625s
	W1002 03:54:35.609895    4337 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:35.618439    4337 out.go:177] 
	W1002 03:54:35.623436    4337 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:54:35.623479    4337 out.go:239] * 
	* 
	W1002 03:54:35.625962    4337 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:54:35.634382    4337 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.69s)
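Every qemu2 start failure in this section ends the same way: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning nothing was listening on the unix socket configured as `SocketVMnetPath` in the logs above. The sketch below is an editor's illustration of that diagnosis, not part of the test output; the socket path is taken from the logs, and the `check_vmnet_socket` helper name is hypothetical.

```shell
# The qemu2 driver hands the VM's network to socket_vmnet_client, which must
# reach a running socket_vmnet daemon on a unix socket. "Connection refused"
# means the socket is absent or nothing is listening on it.
# check_vmnet_socket is a hypothetical helper; the default path matches the
# SocketVMnetPath shown in the logs above.
check_vmnet_socket() {
    sock="${1:-/var/run/socket_vmnet}"
    if [ -S "$sock" ]; then
        echo "socket present: $sock"
    else
        echo "socket missing: $sock"
    fi
}

check_vmnet_socket
```

On this Jenkins host the check would presumably report the socket missing, matching the repeated "Connection refused"; ensuring the socket_vmnet service is running before `minikube start --driver=qemu2` is the usual remedy.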

TestNetworkPlugins/group/bridge/Start (9.88s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.875330084s)

-- stdout --
	* [bridge-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-547000 in cluster bridge-547000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-547000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:54:37.973124    4455 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:54:37.973260    4455 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:54:37.973262    4455 out.go:309] Setting ErrFile to fd 2...
	I1002 03:54:37.973265    4455 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:54:37.973380    4455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:54:37.974423    4455 out.go:303] Setting JSON to false
	I1002 03:54:37.990457    4455 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1451,"bootTime":1696242626,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:54:37.990539    4455 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:54:37.994687    4455 out.go:177] * [bridge-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:54:38.006492    4455 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:54:38.002553    4455 notify.go:220] Checking for updates...
	I1002 03:54:38.013486    4455 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:54:38.017540    4455 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:54:38.020522    4455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:54:38.023536    4455 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:54:38.030740    4455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:54:38.033832    4455 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:54:38.033904    4455 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:54:38.033943    4455 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:54:38.038526    4455 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:54:38.044456    4455 start.go:298] selected driver: qemu2
	I1002 03:54:38.044462    4455 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:54:38.044476    4455 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:54:38.046846    4455 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:54:38.049477    4455 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:54:38.052620    4455 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:54:38.052645    4455 cni.go:84] Creating CNI manager for "bridge"
	I1002 03:54:38.052649    4455 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:54:38.052654    4455 start_flags.go:321] config:
	{Name:bridge-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:bridge-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I1002 03:54:38.057424    4455 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:54:38.064541    4455 out.go:177] * Starting control plane node bridge-547000 in cluster bridge-547000
	I1002 03:54:38.068543    4455 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:54:38.068562    4455 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:54:38.068574    4455 cache.go:57] Caching tarball of preloaded images
	I1002 03:54:38.068629    4455 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:54:38.068635    4455 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:54:38.068719    4455 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/bridge-547000/config.json ...
	I1002 03:54:38.068742    4455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/bridge-547000/config.json: {Name:mk527d9fe294230adeb72f50bce8c56f4ddc27b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:54:38.068954    4455 start.go:365] acquiring machines lock for bridge-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:54:38.068983    4455 start.go:369] acquired machines lock for "bridge-547000" in 23.792µs
	I1002 03:54:38.068993    4455 start.go:93] Provisioning new machine with config: &{Name:bridge-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:bridge-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:54:38.069027    4455 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:54:38.077584    4455 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:54:38.094222    4455 start.go:159] libmachine.API.Create for "bridge-547000" (driver="qemu2")
	I1002 03:54:38.094249    4455 client.go:168] LocalClient.Create starting
	I1002 03:54:38.094310    4455 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:54:38.094335    4455 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:38.094346    4455 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:38.094383    4455 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:54:38.094402    4455 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:38.094409    4455 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:38.094753    4455 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:54:38.206568    4455 main.go:141] libmachine: Creating SSH key...
	I1002 03:54:38.335842    4455 main.go:141] libmachine: Creating Disk image...
	I1002 03:54:38.335853    4455 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:54:38.336019    4455 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/disk.qcow2
	I1002 03:54:38.344781    4455 main.go:141] libmachine: STDOUT: 
	I1002 03:54:38.344796    4455 main.go:141] libmachine: STDERR: 
	I1002 03:54:38.344843    4455 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/disk.qcow2 +20000M
	I1002 03:54:38.352222    4455 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:54:38.352248    4455 main.go:141] libmachine: STDERR: 
	I1002 03:54:38.352266    4455 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/disk.qcow2
	I1002 03:54:38.352275    4455 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:54:38.352311    4455 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:6b:89:05:8c:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/disk.qcow2
	I1002 03:54:38.353994    4455 main.go:141] libmachine: STDOUT: 
	I1002 03:54:38.354007    4455 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:54:38.354025    4455 client.go:171] LocalClient.Create took 259.777292ms
	I1002 03:54:40.356206    4455 start.go:128] duration metric: createHost completed in 2.287193459s
	I1002 03:54:40.356305    4455 start.go:83] releasing machines lock for "bridge-547000", held for 2.287357166s
	W1002 03:54:40.356367    4455 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:40.366715    4455 out.go:177] * Deleting "bridge-547000" in qemu2 ...
	W1002 03:54:40.388593    4455 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:40.388624    4455 start.go:703] Will try again in 5 seconds ...
	I1002 03:54:45.390782    4455 start.go:365] acquiring machines lock for bridge-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:54:45.391203    4455 start.go:369] acquired machines lock for "bridge-547000" in 338.875µs
	I1002 03:54:45.391353    4455 start.go:93] Provisioning new machine with config: &{Name:bridge-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:bridge-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:54:45.391631    4455 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:54:45.403235    4455 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:54:45.451293    4455 start.go:159] libmachine.API.Create for "bridge-547000" (driver="qemu2")
	I1002 03:54:45.451329    4455 client.go:168] LocalClient.Create starting
	I1002 03:54:45.451449    4455 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:54:45.451505    4455 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:45.451524    4455 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:45.451590    4455 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:54:45.451625    4455 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:45.451637    4455 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:45.452174    4455 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:54:45.574724    4455 main.go:141] libmachine: Creating SSH key...
	I1002 03:54:45.754998    4455 main.go:141] libmachine: Creating Disk image...
	I1002 03:54:45.755021    4455 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:54:45.755251    4455 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/disk.qcow2
	I1002 03:54:45.764647    4455 main.go:141] libmachine: STDOUT: 
	I1002 03:54:45.764665    4455 main.go:141] libmachine: STDERR: 
	I1002 03:54:45.764732    4455 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/disk.qcow2 +20000M
	I1002 03:54:45.772301    4455 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:54:45.772314    4455 main.go:141] libmachine: STDERR: 
	I1002 03:54:45.772328    4455 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/disk.qcow2
	I1002 03:54:45.772519    4455 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:54:45.773198    4455 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:1d:87:f0:bc:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/bridge-547000/disk.qcow2
	I1002 03:54:45.775157    4455 main.go:141] libmachine: STDOUT: 
	I1002 03:54:45.775171    4455 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:54:45.775184    4455 client.go:171] LocalClient.Create took 323.856959ms
	I1002 03:54:47.777338    4455 start.go:128] duration metric: createHost completed in 2.38570075s
	I1002 03:54:47.777412    4455 start.go:83] releasing machines lock for "bridge-547000", held for 2.386234333s
	W1002 03:54:47.777814    4455 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:47.791373    4455 out.go:177] 
	W1002 03:54:47.795479    4455 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:54:47.795505    4455 out.go:239] * 
	* 
	W1002 03:54:47.798157    4455 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:54:47.809373    4455 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.88s)

TestNetworkPlugins/group/kubenet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-547000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.842527125s)

-- stdout --
	* [kubenet-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-547000 in cluster kubenet-547000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-547000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:54:49.947912    4572 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:54:49.948089    4572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:54:49.948096    4572 out.go:309] Setting ErrFile to fd 2...
	I1002 03:54:49.948098    4572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:54:49.948237    4572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:54:49.949485    4572 out.go:303] Setting JSON to false
	I1002 03:54:49.965667    4572 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1463,"bootTime":1696242626,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:54:49.965744    4572 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:54:49.971395    4572 out.go:177] * [kubenet-547000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:54:49.978360    4572 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:54:49.982421    4572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:54:49.978424    4572 notify.go:220] Checking for updates...
	I1002 03:54:49.985386    4572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:54:49.988351    4572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:54:49.991382    4572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:54:49.994270    4572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:54:49.997702    4572 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:54:49.997774    4572 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:54:49.997822    4572 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:54:50.002351    4572 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:54:50.009400    4572 start.go:298] selected driver: qemu2
	I1002 03:54:50.009408    4572 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:54:50.009414    4572 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:54:50.011681    4572 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:54:50.014478    4572 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:54:50.015898    4572 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:54:50.015916    4572 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1002 03:54:50.015919    4572 start_flags.go:321] config:
	{Name:kubenet-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:54:50.020482    4572 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:54:50.027408    4572 out.go:177] * Starting control plane node kubenet-547000 in cluster kubenet-547000
	I1002 03:54:50.031320    4572 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:54:50.031345    4572 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:54:50.031356    4572 cache.go:57] Caching tarball of preloaded images
	I1002 03:54:50.031409    4572 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:54:50.031414    4572 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:54:50.031475    4572 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/kubenet-547000/config.json ...
	I1002 03:54:50.031485    4572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/kubenet-547000/config.json: {Name:mk892df118a96a127a5da08aa46ef058d96eefc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:54:50.031692    4572 start.go:365] acquiring machines lock for kubenet-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:54:50.031721    4572 start.go:369] acquired machines lock for "kubenet-547000" in 23.667µs
	I1002 03:54:50.031731    4572 start.go:93] Provisioning new machine with config: &{Name:kubenet-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:54:50.031776    4572 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:54:50.040324    4572 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:54:50.056806    4572 start.go:159] libmachine.API.Create for "kubenet-547000" (driver="qemu2")
	I1002 03:54:50.056836    4572 client.go:168] LocalClient.Create starting
	I1002 03:54:50.056892    4572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:54:50.056927    4572 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:50.056937    4572 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:50.056977    4572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:54:50.056995    4572 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:50.057005    4572 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:50.057324    4572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:54:50.217590    4572 main.go:141] libmachine: Creating SSH key...
	I1002 03:54:50.360724    4572 main.go:141] libmachine: Creating Disk image...
	I1002 03:54:50.360730    4572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:54:50.360913    4572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/disk.qcow2
	I1002 03:54:50.370244    4572 main.go:141] libmachine: STDOUT: 
	I1002 03:54:50.370263    4572 main.go:141] libmachine: STDERR: 
	I1002 03:54:50.370315    4572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/disk.qcow2 +20000M
	I1002 03:54:50.377800    4572 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:54:50.377821    4572 main.go:141] libmachine: STDERR: 
	I1002 03:54:50.377835    4572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/disk.qcow2
	I1002 03:54:50.377841    4572 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:54:50.377873    4572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:a5:77:34:37:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/disk.qcow2
	I1002 03:54:50.379546    4572 main.go:141] libmachine: STDOUT: 
	I1002 03:54:50.379559    4572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:54:50.379579    4572 client.go:171] LocalClient.Create took 322.744625ms
	I1002 03:54:52.381721    4572 start.go:128] duration metric: createHost completed in 2.349970125s
	I1002 03:54:52.381777    4572 start.go:83] releasing machines lock for "kubenet-547000", held for 2.350095042s
	W1002 03:54:52.381866    4572 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:52.390258    4572 out.go:177] * Deleting "kubenet-547000" in qemu2 ...
	W1002 03:54:52.410877    4572 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:52.410901    4572 start.go:703] Will try again in 5 seconds ...
	I1002 03:54:57.412966    4572 start.go:365] acquiring machines lock for kubenet-547000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:54:57.413498    4572 start.go:369] acquired machines lock for "kubenet-547000" in 387.833µs
	I1002 03:54:57.413635    4572 start.go:93] Provisioning new machine with config: &{Name:kubenet-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-547000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:54:57.413881    4572 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:54:57.423548    4572 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 03:54:57.471920    4572 start.go:159] libmachine.API.Create for "kubenet-547000" (driver="qemu2")
	I1002 03:54:57.471965    4572 client.go:168] LocalClient.Create starting
	I1002 03:54:57.472056    4572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:54:57.472122    4572 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:57.472139    4572 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:57.472208    4572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:54:57.472243    4572 main.go:141] libmachine: Decoding PEM data...
	I1002 03:54:57.472263    4572 main.go:141] libmachine: Parsing certificate...
	I1002 03:54:57.472747    4572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:54:57.597581    4572 main.go:141] libmachine: Creating SSH key...
	I1002 03:54:57.699861    4572 main.go:141] libmachine: Creating Disk image...
	I1002 03:54:57.699868    4572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:54:57.700038    4572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/disk.qcow2
	I1002 03:54:57.708922    4572 main.go:141] libmachine: STDOUT: 
	I1002 03:54:57.708937    4572 main.go:141] libmachine: STDERR: 
	I1002 03:54:57.709003    4572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/disk.qcow2 +20000M
	I1002 03:54:57.716534    4572 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:54:57.716547    4572 main.go:141] libmachine: STDERR: 
	I1002 03:54:57.716562    4572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/disk.qcow2
	I1002 03:54:57.716573    4572 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:54:57.716614    4572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:7f:68:6b:8d:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/kubenet-547000/disk.qcow2
	I1002 03:54:57.718198    4572 main.go:141] libmachine: STDOUT: 
	I1002 03:54:57.718210    4572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:54:57.718222    4572 client.go:171] LocalClient.Create took 246.25775ms
	I1002 03:54:59.720394    4572 start.go:128] duration metric: createHost completed in 2.306524084s
	I1002 03:54:59.720489    4572 start.go:83] releasing machines lock for "kubenet-547000", held for 2.30700825s
	W1002 03:54:59.720959    4572 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:54:59.733616    4572 out.go:177] 
	W1002 03:54:59.737696    4572 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:54:59.737725    4572 out.go:239] * 
	* 
	W1002 03:54:59.740138    4572 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:54:59.750646    4572 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.84s)

TestStartStop/group/old-k8s-version/serial/FirstStart (9.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-805000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
E1002 03:55:03.383987    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-805000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.737656833s)

-- stdout --
	* [old-k8s-version-805000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-805000 in cluster old-k8s-version-805000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-805000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:55:01.885909    4686 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:55:01.886063    4686 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:01.886066    4686 out.go:309] Setting ErrFile to fd 2...
	I1002 03:55:01.886069    4686 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:01.886185    4686 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:55:01.887205    4686 out.go:303] Setting JSON to false
	I1002 03:55:01.903225    4686 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1475,"bootTime":1696242626,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:55:01.903303    4686 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:55:01.908922    4686 out.go:177] * [old-k8s-version-805000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:55:01.914954    4686 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:55:01.919026    4686 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:55:01.915018    4686 notify.go:220] Checking for updates...
	I1002 03:55:01.924943    4686 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:55:01.927930    4686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:55:01.930905    4686 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:55:01.933940    4686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:55:01.937220    4686 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:55:01.937284    4686 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:55:01.937325    4686 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:55:01.941939    4686 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:55:01.947849    4686 start.go:298] selected driver: qemu2
	I1002 03:55:01.947856    4686 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:55:01.947861    4686 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:55:01.950127    4686 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:55:01.952923    4686 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:55:01.955966    4686 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:55:01.955983    4686 cni.go:84] Creating CNI manager for ""
	I1002 03:55:01.955989    4686 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 03:55:01.955992    4686 start_flags.go:321] config:
	{Name:old-k8s-version-805000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-805000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSoc
k: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:55:01.960433    4686 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:01.967931    4686 out.go:177] * Starting control plane node old-k8s-version-805000 in cluster old-k8s-version-805000
	I1002 03:55:01.971974    4686 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 03:55:01.971991    4686 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1002 03:55:01.972006    4686 cache.go:57] Caching tarball of preloaded images
	I1002 03:55:01.972064    4686 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:55:01.972070    4686 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1002 03:55:01.972134    4686 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/old-k8s-version-805000/config.json ...
	I1002 03:55:01.972146    4686 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/old-k8s-version-805000/config.json: {Name:mk7c4f096e796b8269d8ff07c94aec14fed29b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:55:01.972340    4686 start.go:365] acquiring machines lock for old-k8s-version-805000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:01.972370    4686 start.go:369] acquired machines lock for "old-k8s-version-805000" in 23.166µs
	I1002 03:55:01.972380    4686 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-805000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-805000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:55:01.972407    4686 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:55:01.980937    4686 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:55:01.996877    4686 start.go:159] libmachine.API.Create for "old-k8s-version-805000" (driver="qemu2")
	I1002 03:55:01.996900    4686 client.go:168] LocalClient.Create starting
	I1002 03:55:01.996956    4686 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:55:01.996983    4686 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:01.996992    4686 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:01.997027    4686 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:55:01.997045    4686 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:01.997052    4686 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:01.997389    4686 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:55:02.109326    4686 main.go:141] libmachine: Creating SSH key...
	I1002 03:55:02.157277    4686 main.go:141] libmachine: Creating Disk image...
	I1002 03:55:02.157282    4686 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:55:02.157459    4686 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2
	I1002 03:55:02.166258    4686 main.go:141] libmachine: STDOUT: 
	I1002 03:55:02.166273    4686 main.go:141] libmachine: STDERR: 
	I1002 03:55:02.166321    4686 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2 +20000M
	I1002 03:55:02.173830    4686 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:55:02.173851    4686 main.go:141] libmachine: STDERR: 
	I1002 03:55:02.173866    4686 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2
	I1002 03:55:02.173875    4686 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:55:02.173905    4686 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:75:63:ff:01:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2
	I1002 03:55:02.175588    4686 main.go:141] libmachine: STDOUT: 
	I1002 03:55:02.175603    4686 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:02.175622    4686 client.go:171] LocalClient.Create took 178.719ms
	I1002 03:55:04.177898    4686 start.go:128] duration metric: createHost completed in 2.2055095s
	I1002 03:55:04.177984    4686 start.go:83] releasing machines lock for "old-k8s-version-805000", held for 2.205650333s
	W1002 03:55:04.178026    4686 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:04.191335    4686 out.go:177] * Deleting "old-k8s-version-805000" in qemu2 ...
	W1002 03:55:04.211691    4686 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:04.211722    4686 start.go:703] Will try again in 5 seconds ...
	I1002 03:55:09.213900    4686 start.go:365] acquiring machines lock for old-k8s-version-805000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:09.214276    4686 start.go:369] acquired machines lock for "old-k8s-version-805000" in 290.041µs
	I1002 03:55:09.214415    4686 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-805000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-805000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:55:09.214677    4686 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:55:09.224302    4686 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:55:09.272033    4686 start.go:159] libmachine.API.Create for "old-k8s-version-805000" (driver="qemu2")
	I1002 03:55:09.272084    4686 client.go:168] LocalClient.Create starting
	I1002 03:55:09.272204    4686 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:55:09.272268    4686 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:09.272285    4686 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:09.272345    4686 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:55:09.272378    4686 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:09.272392    4686 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:09.273563    4686 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:55:09.397975    4686 main.go:141] libmachine: Creating SSH key...
	I1002 03:55:09.535363    4686 main.go:141] libmachine: Creating Disk image...
	I1002 03:55:09.535370    4686 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:55:09.535551    4686 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2
	I1002 03:55:09.544706    4686 main.go:141] libmachine: STDOUT: 
	I1002 03:55:09.544724    4686 main.go:141] libmachine: STDERR: 
	I1002 03:55:09.544779    4686 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2 +20000M
	I1002 03:55:09.552271    4686 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:55:09.552284    4686 main.go:141] libmachine: STDERR: 
	I1002 03:55:09.552304    4686 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2
	I1002 03:55:09.552314    4686 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:55:09.552355    4686 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:ac:d0:fa:b6:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2
	I1002 03:55:09.553992    4686 main.go:141] libmachine: STDOUT: 
	I1002 03:55:09.554002    4686 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:09.554017    4686 client.go:171] LocalClient.Create took 281.9335ms
	I1002 03:55:11.556145    4686 start.go:128] duration metric: createHost completed in 2.341488125s
	I1002 03:55:11.556212    4686 start.go:83] releasing machines lock for "old-k8s-version-805000", held for 2.341962041s
	W1002 03:55:11.556614    4686 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-805000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-805000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:11.565311    4686 out.go:177] 
	W1002 03:55:11.570416    4686 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:55:11.570475    4686 out.go:239] * 
	* 
	W1002 03:55:11.572979    4686 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:55:11.584275    4686 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-805000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000: exit status 7 (67.578125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-805000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.81s)
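Editor's note: every start attempt above dies on the same line, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon was not listening when the qemu2 driver tried to attach the VM's network. A minimal triage sketch for that pattern follows; the helper name is hypothetical, and the `brew services` hint assumes socket_vmnet was installed via Homebrew (whose formula runs the daemon as root), as on these CI agents.

```shell
# Hypothetical helper: classify the start failures seen in this report
# from their stderr text, so repeated runs can be bucketed quickly.
classify_start_failure() {
  case "$1" in
    *'Failed to connect to "/var/run/socket_vmnet"'*)
      # qemu2 driver could not reach the socket_vmnet unix socket.
      echo 'socket_vmnet down: check the daemon, e.g. `sudo brew services start socket_vmnet`'
      ;;
    *'GUEST_PROVISION'*)
      echo 'generic guest provisioning failure: inspect `minikube logs`'
      ;;
    *)
      echo 'unrecognized failure'
      ;;
  esac
}

classify_start_failure 'ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused'
```

On a live agent one would also confirm the socket exists (`ls -l /var/run/socket_vmnet`) before retrying the test.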

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-805000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-805000 create -f testdata/busybox.yaml: exit status 1 (29.625083ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-805000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000: exit status 7 (28.633666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-805000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000: exit status 7 (28.467ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-805000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-805000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-805000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-805000 describe deploy/metrics-server -n kube-system: exit status 1 (25.885292ms)

** stderr ** 
	error: context "old-k8s-version-805000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-805000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000: exit status 7 (28.657917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-805000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-805000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-805000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.193516209s)

-- stdout --
	* [old-k8s-version-805000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-805000 in cluster old-k8s-version-805000
	* Restarting existing qemu2 VM for "old-k8s-version-805000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-805000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:55:12.038277    4722 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:55:12.038425    4722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:12.038429    4722 out.go:309] Setting ErrFile to fd 2...
	I1002 03:55:12.038431    4722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:12.038554    4722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:55:12.039485    4722 out.go:303] Setting JSON to false
	I1002 03:55:12.055409    4722 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1486,"bootTime":1696242626,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:55:12.055503    4722 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:55:12.060408    4722 out.go:177] * [old-k8s-version-805000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:55:12.067481    4722 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:55:12.075453    4722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:55:12.067536    4722 notify.go:220] Checking for updates...
	I1002 03:55:12.081411    4722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:55:12.084405    4722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:55:12.087415    4722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:55:12.090487    4722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:55:12.093758    4722 config.go:182] Loaded profile config "old-k8s-version-805000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1002 03:55:12.097371    4722 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1002 03:55:12.100464    4722 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:55:12.104352    4722 out.go:177] * Using the qemu2 driver based on existing profile
	I1002 03:55:12.111419    4722 start.go:298] selected driver: qemu2
	I1002 03:55:12.111425    4722 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-805000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-805000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:55:12.111479    4722 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:55:12.113875    4722 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:55:12.113903    4722 cni.go:84] Creating CNI manager for ""
	I1002 03:55:12.113910    4722 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 03:55:12.113916    4722 start_flags.go:321] config:
	{Name:old-k8s-version-805000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-805000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Use
rs:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:55:12.118499    4722 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:12.125414    4722 out.go:177] * Starting control plane node old-k8s-version-805000 in cluster old-k8s-version-805000
	I1002 03:55:12.128373    4722 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 03:55:12.128387    4722 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1002 03:55:12.128399    4722 cache.go:57] Caching tarball of preloaded images
	I1002 03:55:12.128464    4722 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:55:12.128471    4722 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1002 03:55:12.128527    4722 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/old-k8s-version-805000/config.json ...
	I1002 03:55:12.128849    4722 start.go:365] acquiring machines lock for old-k8s-version-805000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:12.128875    4722 start.go:369] acquired machines lock for "old-k8s-version-805000" in 20.292µs
	I1002 03:55:12.128883    4722 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:55:12.128887    4722 fix.go:54] fixHost starting: 
	I1002 03:55:12.129004    4722 fix.go:102] recreateIfNeeded on old-k8s-version-805000: state=Stopped err=<nil>
	W1002 03:55:12.129012    4722 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:55:12.132465    4722 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-805000" ...
	I1002 03:55:12.140444    4722 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:ac:d0:fa:b6:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2
	I1002 03:55:12.142580    4722 main.go:141] libmachine: STDOUT: 
	I1002 03:55:12.142602    4722 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:12.142632    4722 fix.go:56] fixHost completed within 13.744ms
	I1002 03:55:12.142638    4722 start.go:83] releasing machines lock for "old-k8s-version-805000", held for 13.75875ms
	W1002 03:55:12.142643    4722 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:55:12.142681    4722 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:12.142686    4722 start.go:703] Will try again in 5 seconds ...
	I1002 03:55:17.144773    4722 start.go:365] acquiring machines lock for old-k8s-version-805000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:17.145135    4722 start.go:369] acquired machines lock for "old-k8s-version-805000" in 240.958µs
	I1002 03:55:17.145249    4722 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:55:17.145266    4722 fix.go:54] fixHost starting: 
	I1002 03:55:17.145902    4722 fix.go:102] recreateIfNeeded on old-k8s-version-805000: state=Stopped err=<nil>
	W1002 03:55:17.145927    4722 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:55:17.155243    4722 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-805000" ...
	I1002 03:55:17.159458    4722 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:ac:d0:fa:b6:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/old-k8s-version-805000/disk.qcow2
	I1002 03:55:17.168167    4722 main.go:141] libmachine: STDOUT: 
	I1002 03:55:17.168215    4722 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:17.168265    4722 fix.go:56] fixHost completed within 23.000417ms
	I1002 03:55:17.168282    4722 start.go:83] releasing machines lock for "old-k8s-version-805000", held for 23.129375ms
	W1002 03:55:17.168404    4722 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-805000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-805000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:17.177054    4722 out.go:177] 
	W1002 03:55:17.181293    4722 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:55:17.181316    4722 out.go:239] * 
	* 
	W1002 03:55:17.183998    4722 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:55:17.193291    4722 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-805000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000: exit status 7 (69.678916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-805000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-805000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000: exit status 7 (32.428125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-805000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-805000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-805000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-805000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.693041ms)

** stderr ** 
	error: context "old-k8s-version-805000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-805000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000: exit status 7 (29.614916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-805000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-805000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-805000 "sudo crictl images -o json": exit status 89 (43.334375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-805000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-805000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-805000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000: exit status 7 (28.408375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-805000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-805000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-805000 --alsologtostderr -v=1: exit status 89 (42.127708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-805000"

-- /stdout --
** stderr ** 
	I1002 03:55:17.463890    4741 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:55:17.464290    4741 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:17.464295    4741 out.go:309] Setting ErrFile to fd 2...
	I1002 03:55:17.464297    4741 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:17.464476    4741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:55:17.464660    4741 out.go:303] Setting JSON to false
	I1002 03:55:17.464670    4741 mustload.go:65] Loading cluster: old-k8s-version-805000
	I1002 03:55:17.464861    4741 config.go:182] Loaded profile config "old-k8s-version-805000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1002 03:55:17.467771    4741 out.go:177] * The control plane node must be running for this command
	I1002 03:55:17.474603    4741 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-805000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-805000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000: exit status 7 (28.204333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-805000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000: exit status 7 (27.557084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-805000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.87s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-049000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-049000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (9.803146375s)

-- stdout --
	* [no-preload-049000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-049000 in cluster no-preload-049000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-049000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:55:17.928827    4764 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:55:17.928969    4764 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:17.928972    4764 out.go:309] Setting ErrFile to fd 2...
	I1002 03:55:17.928975    4764 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:17.929108    4764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:55:17.930114    4764 out.go:303] Setting JSON to false
	I1002 03:55:17.946134    4764 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1491,"bootTime":1696242626,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:55:17.946230    4764 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:55:17.951303    4764 out.go:177] * [no-preload-049000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:55:17.958246    4764 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:55:17.962244    4764 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:55:17.958310    4764 notify.go:220] Checking for updates...
	I1002 03:55:17.968208    4764 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:55:17.971244    4764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:55:17.974142    4764 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:55:17.977214    4764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:55:17.980612    4764 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:55:17.980669    4764 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:55:17.980714    4764 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:55:17.984155    4764 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:55:17.991211    4764 start.go:298] selected driver: qemu2
	I1002 03:55:17.991220    4764 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:55:17.991227    4764 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:55:17.993742    4764 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:55:17.996090    4764 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:55:17.999232    4764 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:55:17.999252    4764 cni.go:84] Creating CNI manager for ""
	I1002 03:55:17.999260    4764 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:55:17.999264    4764 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:55:17.999271    4764 start_flags.go:321] config:
	{Name:no-preload-049000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-049000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSH
AgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:55:18.003706    4764 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:18.011170    4764 out.go:177] * Starting control plane node no-preload-049000 in cluster no-preload-049000
	I1002 03:55:18.015195    4764 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:55:18.015276    4764 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/no-preload-049000/config.json ...
	I1002 03:55:18.015295    4764 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/no-preload-049000/config.json: {Name:mk4c5ca15611c96884515b488498c7ba801557fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:55:18.015302    4764 cache.go:107] acquiring lock: {Name:mk867365eaf0827cccc8ed385092ff673bf41ac0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:18.015346    4764 cache.go:107] acquiring lock: {Name:mka1ad7981e999912697a663e7e9ad2b4f760ac8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:18.015304    4764 cache.go:107] acquiring lock: {Name:mkfb901c7f38d77f6d3178f9e744fed622697e3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:18.015484    4764 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1002 03:55:18.015492    4764 cache.go:107] acquiring lock: {Name:mk70bf57837d42199d7d75ded255abbeaf52a090 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:18.015492    4764 cache.go:107] acquiring lock: {Name:mk4d64febbf0d174ca1b2e5bdb6863e82dd705f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:18.015535    4764 cache.go:107] acquiring lock: {Name:mkf7727967f46c874494c7a0fb8e7ad625940bb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:18.015553    4764 start.go:365] acquiring machines lock for no-preload-049000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:18.015566    4764 cache.go:107] acquiring lock: {Name:mkd86c6c18e169d05c19816f79cecbd8f0898571 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:18.015580    4764 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 03:55:18.015576    4764 cache.go:107] acquiring lock: {Name:mk1a637918d297798baec5f929f9e1b628ec74a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:18.015608    4764 cache.go:115] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 03:55:18.015624    4764 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 326.792µs
	I1002 03:55:18.015636    4764 start.go:369] acquired machines lock for "no-preload-049000" in 72.875µs
	I1002 03:55:18.015639    4764 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 03:55:18.015646    4764 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 03:55:18.015694    4764 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 03:55:18.015710    4764 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 03:55:18.015730    4764 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 03:55:18.015679    4764 start.go:93] Provisioning new machine with config: &{Name:no-preload-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-049000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:55:18.015802    4764 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:55:18.015814    4764 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1002 03:55:18.024208    4764 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:55:18.028872    4764 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 03:55:18.028905    4764 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 03:55:18.028967    4764 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1002 03:55:18.029027    4764 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 03:55:18.029442    4764 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1002 03:55:18.029444    4764 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 03:55:18.031787    4764 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 03:55:18.041179    4764 start.go:159] libmachine.API.Create for "no-preload-049000" (driver="qemu2")
	I1002 03:55:18.041198    4764 client.go:168] LocalClient.Create starting
	I1002 03:55:18.041258    4764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:55:18.041294    4764 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:18.041306    4764 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:18.041345    4764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:55:18.041364    4764 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:18.041372    4764 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:18.041788    4764 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:55:18.162083    4764 main.go:141] libmachine: Creating SSH key...
	I1002 03:55:18.266545    4764 main.go:141] libmachine: Creating Disk image...
	I1002 03:55:18.266557    4764 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:55:18.266772    4764 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2
	I1002 03:55:18.276528    4764 main.go:141] libmachine: STDOUT: 
	I1002 03:55:18.276547    4764 main.go:141] libmachine: STDERR: 
	I1002 03:55:18.276604    4764 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2 +20000M
	I1002 03:55:18.284835    4764 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:55:18.284847    4764 main.go:141] libmachine: STDERR: 
	I1002 03:55:18.284864    4764 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2
	I1002 03:55:18.284874    4764 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:55:18.284917    4764 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:9d:28:66:2e:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2
	I1002 03:55:18.286734    4764 main.go:141] libmachine: STDOUT: 
	I1002 03:55:18.286749    4764 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:18.286767    4764 client.go:171] LocalClient.Create took 245.570417ms
	I1002 03:55:18.671857    4764 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2
	I1002 03:55:18.718971    4764 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2
	I1002 03:55:18.875040    4764 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I1002 03:55:18.978501    4764 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I1002 03:55:18.978519    4764 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 963.2025ms
	I1002 03:55:18.978526    4764 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I1002 03:55:19.109494    4764 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2
	I1002 03:55:19.339696    4764 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I1002 03:55:19.525204    4764 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I1002 03:55:19.753841    4764 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2
	I1002 03:55:20.287129    4764 start.go:128] duration metric: createHost completed in 2.271348958s
	I1002 03:55:20.287181    4764 start.go:83] releasing machines lock for "no-preload-049000", held for 2.271583166s
	W1002 03:55:20.287240    4764 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:20.298056    4764 out.go:177] * Deleting "no-preload-049000" in qemu2 ...
	W1002 03:55:20.318367    4764 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:20.318408    4764 start.go:703] Will try again in 5 seconds ...
	I1002 03:55:21.866345    4764 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 exists
	I1002 03:55:21.866404    4764 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2" took 3.851027875s
	I1002 03:55:21.866434    4764 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 succeeded
	I1002 03:55:22.722416    4764 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I1002 03:55:22.722460    4764 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 4.706993167s
	I1002 03:55:22.722513    4764 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I1002 03:55:22.864586    4764 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 exists
	I1002 03:55:22.864647    4764 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2" took 4.84919325s
	I1002 03:55:22.864679    4764 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 succeeded
	I1002 03:55:23.163940    4764 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 exists
	I1002 03:55:23.163990    4764 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2" took 5.14875975s
	I1002 03:55:23.164040    4764 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 succeeded
	I1002 03:55:23.666222    4764 cache.go:157] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 exists
	I1002 03:55:23.666276    4764 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2" took 5.651102292s
	I1002 03:55:23.666301    4764 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 succeeded
	I1002 03:55:25.318680    4764 start.go:365] acquiring machines lock for no-preload-049000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:25.319090    4764 start.go:369] acquired machines lock for "no-preload-049000" in 340.041µs
	I1002 03:55:25.319197    4764 start.go:93] Provisioning new machine with config: &{Name:no-preload-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-049000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:55:25.319450    4764 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:55:25.327244    4764 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:55:25.375664    4764 start.go:159] libmachine.API.Create for "no-preload-049000" (driver="qemu2")
	I1002 03:55:25.375700    4764 client.go:168] LocalClient.Create starting
	I1002 03:55:25.375876    4764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:55:25.375961    4764 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:25.375985    4764 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:25.376052    4764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:55:25.376090    4764 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:25.376106    4764 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:25.376592    4764 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:55:25.499497    4764 main.go:141] libmachine: Creating SSH key...
	I1002 03:55:25.643087    4764 main.go:141] libmachine: Creating Disk image...
	I1002 03:55:25.643101    4764 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:55:25.643269    4764 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2
	I1002 03:55:25.652570    4764 main.go:141] libmachine: STDOUT: 
	I1002 03:55:25.652583    4764 main.go:141] libmachine: STDERR: 
	I1002 03:55:25.652637    4764 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2 +20000M
	I1002 03:55:25.660368    4764 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:55:25.660380    4764 main.go:141] libmachine: STDERR: 
	I1002 03:55:25.660391    4764 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2
	I1002 03:55:25.660398    4764 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:55:25.660444    4764 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b4:5b:91:82:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2
	I1002 03:55:25.662270    4764 main.go:141] libmachine: STDOUT: 
	I1002 03:55:25.662290    4764 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:25.662302    4764 client.go:171] LocalClient.Create took 286.604375ms
	I1002 03:55:27.662533    4764 start.go:128] duration metric: createHost completed in 2.3430965s
	I1002 03:55:27.662609    4764 start.go:83] releasing machines lock for "no-preload-049000", held for 2.343545416s
	W1002 03:55:27.662873    4764 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-049000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-049000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:27.674303    4764 out.go:177] 
	W1002 03:55:27.678585    4764 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:55:27.678666    4764 out.go:239] * 
	* 
	W1002 03:55:27.681282    4764 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:55:27.690431    4764 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-049000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (60.642ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.87s)
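Annotation: every failure in this group reduces to the same condition seen in the log above: QEMU is launched through `/opt/socket_vmnet/bin/socket_vmnet_client`, and the connect to the `/var/run/socket_vmnet` unix socket is refused because no daemon is accepting on it. As an illustrative aside (not part of the test run, and using a throwaway temp path rather than the real socket), the three states such a probe can observe can be sketched in a few lines of Python:

```python
import os
import socket
import tempfile

def probe_unix_socket(path: str) -> str:
    """Try to connect to a unix-domain socket and classify the outcome."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "ok"
    except ConnectionRefusedError:
        # Socket file exists but nothing is listening on it -- the same
        # condition the minikube log reports for socket_vmnet.
        return "refused"
    except FileNotFoundError:
        # Socket file was never created (daemon never started).
        return "missing"
    finally:
        s.close()

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "probe.sock")
    print(probe_unix_socket(path))   # missing: no socket file yet
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(path)                # file now exists, but not listening
    print(probe_unix_socket(path))   # refused: matches the log's error
    server.listen(1)
    print(probe_unix_socket(path))   # ok: a listener is accepting
```

The "refused" branch is the failure mode here: the socket path is present in the QEMU command line, but the socket_vmnet service on the CI host is not accepting connections, so every VM create and restart attempt fails the same way.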

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-049000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-049000 create -f testdata/busybox.yaml: exit status 1 (29.367875ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-049000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (29.256041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (28.104625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-049000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-049000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-049000 describe deploy/metrics-server -n kube-system: exit status 1 (26.103416ms)

** stderr ** 
	error: context "no-preload-049000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-049000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (28.570459ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-049000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-049000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (5.183087208s)

-- stdout --
	* [no-preload-049000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-049000 in cluster no-preload-049000
	* Restarting existing qemu2 VM for "no-preload-049000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-049000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:55:28.146915    4896 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:55:28.147049    4896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:28.147052    4896 out.go:309] Setting ErrFile to fd 2...
	I1002 03:55:28.147055    4896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:28.147180    4896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:55:28.148118    4896 out.go:303] Setting JSON to false
	I1002 03:55:28.163984    4896 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1502,"bootTime":1696242626,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:55:28.164070    4896 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:55:28.168726    4896 out.go:177] * [no-preload-049000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:55:28.174631    4896 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:55:28.177704    4896 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:55:28.174693    4896 notify.go:220] Checking for updates...
	I1002 03:55:28.183583    4896 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:55:28.186695    4896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:55:28.189626    4896 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:55:28.192626    4896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:55:28.195948    4896 config.go:182] Loaded profile config "no-preload-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:55:28.196218    4896 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:55:28.200550    4896 out.go:177] * Using the qemu2 driver based on existing profile
	I1002 03:55:28.207647    4896 start.go:298] selected driver: qemu2
	I1002 03:55:28.207655    4896 start.go:902] validating driver "qemu2" against &{Name:no-preload-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-049000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:55:28.207716    4896 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:55:28.209997    4896 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:55:28.210021    4896 cni.go:84] Creating CNI manager for ""
	I1002 03:55:28.210029    4896 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:55:28.210036    4896 start_flags.go:321] config:
	{Name:no-preload-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-049000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:55:28.214219    4896 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:28.221556    4896 out.go:177] * Starting control plane node no-preload-049000 in cluster no-preload-049000
	I1002 03:55:28.225648    4896 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:55:28.225717    4896 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/no-preload-049000/config.json ...
	I1002 03:55:28.225757    4896 cache.go:107] acquiring lock: {Name:mkfb901c7f38d77f6d3178f9e744fed622697e3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:28.225767    4896 cache.go:107] acquiring lock: {Name:mk867365eaf0827cccc8ed385092ff673bf41ac0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:28.225782    4896 cache.go:107] acquiring lock: {Name:mk70bf57837d42199d7d75ded255abbeaf52a090 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:28.225816    4896 cache.go:115] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 03:55:28.225822    4896 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 69.666µs
	I1002 03:55:28.225828    4896 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 03:55:28.225827    4896 cache.go:115] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 exists
	I1002 03:55:28.225836    4896 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2" took 84.166µs
	I1002 03:55:28.225844    4896 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 succeeded
	I1002 03:55:28.225845    4896 cache.go:107] acquiring lock: {Name:mk1a637918d297798baec5f929f9e1b628ec74a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:28.225852    4896 cache.go:115] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 exists
	I1002 03:55:28.225851    4896 cache.go:107] acquiring lock: {Name:mkd86c6c18e169d05c19816f79cecbd8f0898571 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:28.225858    4896 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2" took 104.666µs
	I1002 03:55:28.225864    4896 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 succeeded
	I1002 03:55:28.225871    4896 cache.go:107] acquiring lock: {Name:mk4d64febbf0d174ca1b2e5bdb6863e82dd705f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:28.225885    4896 cache.go:115] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I1002 03:55:28.225877    4896 cache.go:115] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 exists
	I1002 03:55:28.225909    4896 cache.go:115] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 exists
	I1002 03:55:28.225918    4896 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2" took 47.833µs
	I1002 03:55:28.225922    4896 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 succeeded
	I1002 03:55:28.225881    4896 cache.go:107] acquiring lock: {Name:mka1ad7981e999912697a663e7e9ad2b4f760ac8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:28.225889    4896 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 38.875µs
	I1002 03:55:28.225947    4896 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I1002 03:55:28.225957    4896 cache.go:107] acquiring lock: {Name:mkf7727967f46c874494c7a0fb8e7ad625940bb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:28.225952    4896 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2" took 80.958µs
	I1002 03:55:28.225983    4896 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 succeeded
	I1002 03:55:28.226016    4896 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1002 03:55:28.226026    4896 cache.go:115] /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I1002 03:55:28.226030    4896 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 148.875µs
	I1002 03:55:28.226038    4896 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I1002 03:55:28.226056    4896 start.go:365] acquiring machines lock for no-preload-049000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:28.226091    4896 start.go:369] acquired machines lock for "no-preload-049000" in 29.166µs
	I1002 03:55:28.226099    4896 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:55:28.226105    4896 fix.go:54] fixHost starting: 
	I1002 03:55:28.226221    4896 fix.go:102] recreateIfNeeded on no-preload-049000: state=Stopped err=<nil>
	W1002 03:55:28.226228    4896 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:55:28.233590    4896 out.go:177] * Restarting existing qemu2 VM for "no-preload-049000" ...
	I1002 03:55:28.236657    4896 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b4:5b:91:82:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2
	I1002 03:55:28.237381    4896 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1002 03:55:28.238760    4896 main.go:141] libmachine: STDOUT: 
	I1002 03:55:28.238772    4896 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:28.238801    4896 fix.go:56] fixHost completed within 12.69725ms
	I1002 03:55:28.238805    4896 start.go:83] releasing machines lock for "no-preload-049000", held for 12.710542ms
	W1002 03:55:28.238811    4896 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:55:28.238843    4896 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:28.238848    4896 start.go:703] Will try again in 5 seconds ...
	I1002 03:55:28.809420    4896 cache.go:162] opening:  /Users/jenkins/minikube-integration/17340-994/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I1002 03:55:33.239501    4896 start.go:365] acquiring machines lock for no-preload-049000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:33.239881    4896 start.go:369] acquired machines lock for "no-preload-049000" in 263.875µs
	I1002 03:55:33.240015    4896 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:55:33.240047    4896 fix.go:54] fixHost starting: 
	I1002 03:55:33.240669    4896 fix.go:102] recreateIfNeeded on no-preload-049000: state=Stopped err=<nil>
	W1002 03:55:33.240696    4896 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:55:33.245406    4896 out.go:177] * Restarting existing qemu2 VM for "no-preload-049000" ...
	I1002 03:55:33.254369    4896 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b4:5b:91:82:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/no-preload-049000/disk.qcow2
	I1002 03:55:33.265271    4896 main.go:141] libmachine: STDOUT: 
	I1002 03:55:33.265343    4896 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:33.265468    4896 fix.go:56] fixHost completed within 25.432417ms
	I1002 03:55:33.265488    4896 start.go:83] releasing machines lock for "no-preload-049000", held for 25.583917ms
	W1002 03:55:33.265761    4896 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-049000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-049000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:33.273125    4896 out.go:177] 
	W1002 03:55:33.277321    4896 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:55:33.277352    4896 out.go:239] * 
	* 
	W1002 03:55:33.279710    4896 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:55:33.290261    4896 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-049000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (65.456208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-049000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (32.465416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-049000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-049000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-049000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.743417ms)

** stderr ** 
	error: context "no-preload-049000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-049000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (28.716042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-049000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-049000 "sudo crictl images -o json": exit status 89 (39.329167ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-049000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-049000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-049000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (28.318292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-049000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-049000 --alsologtostderr -v=1: exit status 89 (40.253583ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-049000"

-- /stdout --
** stderr ** 
	I1002 03:55:33.553475    4929 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:55:33.553668    4929 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:33.553671    4929 out.go:309] Setting ErrFile to fd 2...
	I1002 03:55:33.553674    4929 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:33.553805    4929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:55:33.554090    4929 out.go:303] Setting JSON to false
	I1002 03:55:33.554099    4929 mustload.go:65] Loading cluster: no-preload-049000
	I1002 03:55:33.554308    4929 config.go:182] Loaded profile config "no-preload-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:55:33.558515    4929 out.go:177] * The control plane node must be running for this command
	I1002 03:55:33.562696    4929 out.go:177]   To start a cluster, run: "minikube start -p no-preload-049000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-049000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (28.956458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (28.372417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
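The Pause failures above all follow one pattern: `minikube pause` exits with status 89 because the control plane node is stopped. A minimal sketch of a guard for scripting around this, using the profile name from the log (the `maybe_pause` helper is hypothetical, not part of the test suite):

```shell
# maybe_pause: pause a minikube profile only when its host reports Running.
# Sketch only; assumes `minikube` is on PATH when a cluster actually exists.
maybe_pause() {
  profile=$1
  # `minikube status --format '{{.Host}}'` prints the host state and exits
  # non-zero (e.g. 7) for stopped or missing profiles; `|| true` absorbs that.
  state=$(minikube status -p "$profile" --format '{{.Host}}' 2>/dev/null || true)
  if [ "$state" = "Running" ]; then
    minikube pause -p "$profile"
  else
    echo "skipping pause: host state is '${state:-unknown}'"
  fi
}
maybe_pause no-preload-049000
```

This mirrors what the post-mortem helper does (`status --format={{.Host}}` returning "Stopped", exit 7), but acts on the state instead of failing.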

TestStartStop/group/embed-certs/serial/FirstStart (9.78s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-344000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-344000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (9.71461825s)

-- stdout --
	* [embed-certs-344000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-344000 in cluster embed-certs-344000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-344000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:55:34.020136    4952 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:55:34.020290    4952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:34.020293    4952 out.go:309] Setting ErrFile to fd 2...
	I1002 03:55:34.020296    4952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:34.020425    4952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:55:34.021446    4952 out.go:303] Setting JSON to false
	I1002 03:55:34.037649    4952 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1508,"bootTime":1696242626,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:55:34.037749    4952 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:55:34.042607    4952 out.go:177] * [embed-certs-344000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:55:34.049530    4952 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:55:34.049579    4952 notify.go:220] Checking for updates...
	I1002 03:55:34.052574    4952 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:55:34.053737    4952 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:55:34.056519    4952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:55:34.059539    4952 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:55:34.062524    4952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:55:34.065889    4952 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:55:34.065945    4952 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:55:34.065992    4952 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:55:34.070463    4952 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:55:34.077476    4952 start.go:298] selected driver: qemu2
	I1002 03:55:34.077482    4952 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:55:34.077487    4952 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:55:34.079778    4952 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:55:34.082514    4952 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:55:34.090574    4952 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:55:34.090597    4952 cni.go:84] Creating CNI manager for ""
	I1002 03:55:34.090605    4952 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:55:34.090611    4952 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:55:34.090617    4952 start_flags.go:321] config:
	{Name:embed-certs-344000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-344000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:55:34.095133    4952 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:34.102550    4952 out.go:177] * Starting control plane node embed-certs-344000 in cluster embed-certs-344000
	I1002 03:55:34.106618    4952 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:55:34.106633    4952 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:55:34.106648    4952 cache.go:57] Caching tarball of preloaded images
	I1002 03:55:34.106706    4952 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:55:34.106712    4952 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:55:34.106785    4952 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/embed-certs-344000/config.json ...
	I1002 03:55:34.106805    4952 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/embed-certs-344000/config.json: {Name:mk163520dec3cbf2f504ea63cbef35bd769aac5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:55:34.107005    4952 start.go:365] acquiring machines lock for embed-certs-344000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:34.107034    4952 start.go:369] acquired machines lock for "embed-certs-344000" in 23.917µs
	I1002 03:55:34.107044    4952 start.go:93] Provisioning new machine with config: &{Name:embed-certs-344000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-344000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:55:34.107073    4952 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:55:34.115495    4952 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:55:34.132193    4952 start.go:159] libmachine.API.Create for "embed-certs-344000" (driver="qemu2")
	I1002 03:55:34.132216    4952 client.go:168] LocalClient.Create starting
	I1002 03:55:34.132273    4952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:55:34.132301    4952 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:34.132313    4952 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:34.132348    4952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:55:34.132366    4952 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:34.132374    4952 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:34.132762    4952 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:55:34.244694    4952 main.go:141] libmachine: Creating SSH key...
	I1002 03:55:34.279812    4952 main.go:141] libmachine: Creating Disk image...
	I1002 03:55:34.279817    4952 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:55:34.279979    4952 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2
	I1002 03:55:34.288769    4952 main.go:141] libmachine: STDOUT: 
	I1002 03:55:34.288781    4952 main.go:141] libmachine: STDERR: 
	I1002 03:55:34.288840    4952 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2 +20000M
	I1002 03:55:34.296333    4952 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:55:34.296345    4952 main.go:141] libmachine: STDERR: 
	I1002 03:55:34.296364    4952 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2
	I1002 03:55:34.296371    4952 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:55:34.296402    4952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:65:44:89:99:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2
	I1002 03:55:34.297993    4952 main.go:141] libmachine: STDOUT: 
	I1002 03:55:34.298005    4952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:34.298024    4952 client.go:171] LocalClient.Create took 165.806333ms
	I1002 03:55:36.300156    4952 start.go:128] duration metric: createHost completed in 2.193107542s
	I1002 03:55:36.300227    4952 start.go:83] releasing machines lock for "embed-certs-344000", held for 2.193229667s
	W1002 03:55:36.300275    4952 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:36.306308    4952 out.go:177] * Deleting "embed-certs-344000" in qemu2 ...
	W1002 03:55:36.328389    4952 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:36.328442    4952 start.go:703] Will try again in 5 seconds ...
	I1002 03:55:41.330622    4952 start.go:365] acquiring machines lock for embed-certs-344000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:41.331018    4952 start.go:369] acquired machines lock for "embed-certs-344000" in 276.25µs
	I1002 03:55:41.331147    4952 start.go:93] Provisioning new machine with config: &{Name:embed-certs-344000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-344000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:55:41.331434    4952 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:55:41.337129    4952 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:55:41.381590    4952 start.go:159] libmachine.API.Create for "embed-certs-344000" (driver="qemu2")
	I1002 03:55:41.381644    4952 client.go:168] LocalClient.Create starting
	I1002 03:55:41.381776    4952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:55:41.381840    4952 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:41.381854    4952 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:41.381936    4952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:55:41.382037    4952 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:41.382057    4952 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:41.382552    4952 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:55:41.507885    4952 main.go:141] libmachine: Creating SSH key...
	I1002 03:55:41.646419    4952 main.go:141] libmachine: Creating Disk image...
	I1002 03:55:41.646429    4952 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:55:41.646615    4952 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2
	I1002 03:55:41.656177    4952 main.go:141] libmachine: STDOUT: 
	I1002 03:55:41.656192    4952 main.go:141] libmachine: STDERR: 
	I1002 03:55:41.656246    4952 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2 +20000M
	I1002 03:55:41.663787    4952 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:55:41.663801    4952 main.go:141] libmachine: STDERR: 
	I1002 03:55:41.663814    4952 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2
	I1002 03:55:41.663823    4952 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:55:41.663866    4952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:bb:57:de:de:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2
	I1002 03:55:41.665478    4952 main.go:141] libmachine: STDOUT: 
	I1002 03:55:41.665491    4952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:41.665504    4952 client.go:171] LocalClient.Create took 283.861208ms
	I1002 03:55:43.667636    4952 start.go:128] duration metric: createHost completed in 2.336221541s
	I1002 03:55:43.667712    4952 start.go:83] releasing machines lock for "embed-certs-344000", held for 2.33671925s
	W1002 03:55:43.668091    4952 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:43.678855    4952 out.go:177] 
	W1002 03:55:43.682880    4952 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:55:43.682930    4952 out.go:239] * 
	* 
	W1002 03:55:43.685743    4952 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:55:43.694781    4952 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-344000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000: exit status 7 (64.928583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-344000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.78s)
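Both VM-creation attempts above fail for the same reason: QEMU's network fd comes from `socket_vmnet_client`, which cannot reach the daemon at `/var/run/socket_vmnet` ("Connection refused"). A minimal host-side check is whether a socket file exists at that path at all (the `check_socket` helper is hypothetical, and the `brew services` hint assumes socket_vmnet was installed via Homebrew):

```shell
# check_socket: report whether a unix socket file exists at the given path.
# Sketch only; a present socket file still does not guarantee the
# socket_vmnet daemon is actually accepting connections.
check_socket() {
  if [ -S "$1" ]; then
    echo "socket present: $1"
  else
    echo "socket missing: $1 (try: sudo brew services restart socket_vmnet)"
  fi
}
check_socket /var/run/socket_vmnet
```

`[ -S path ]` is the POSIX test for a socket-type file, so a stale regular file left at the path is also reported as missing.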

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-344000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-344000 create -f testdata/busybox.yaml: exit status 1 (29.217333ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-344000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000: exit status 7 (28.586625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-344000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000: exit status 7 (28.405167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-344000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-344000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-344000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-344000 describe deploy/metrics-server -n kube-system: exit status 1 (25.844792ms)

** stderr ** 
	error: context "embed-certs-344000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-344000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000: exit status 7 (28.814792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-344000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-344000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-344000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (5.174497291s)

-- stdout --
	* [embed-certs-344000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-344000 in cluster embed-certs-344000
	* Restarting existing qemu2 VM for "embed-certs-344000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-344000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:55:44.149675    4988 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:55:44.149823    4988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:44.149825    4988 out.go:309] Setting ErrFile to fd 2...
	I1002 03:55:44.149828    4988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:44.149977    4988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:55:44.150943    4988 out.go:303] Setting JSON to false
	I1002 03:55:44.167058    4988 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1518,"bootTime":1696242626,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:55:44.167158    4988 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:55:44.170995    4988 out.go:177] * [embed-certs-344000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:55:44.176957    4988 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:55:44.180966    4988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:55:44.177028    4988 notify.go:220] Checking for updates...
	I1002 03:55:44.184022    4988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:55:44.186927    4988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:55:44.189938    4988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:55:44.192969    4988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:55:44.194766    4988 config.go:182] Loaded profile config "embed-certs-344000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:55:44.195013    4988 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:55:44.198938    4988 out.go:177] * Using the qemu2 driver based on existing profile
	I1002 03:55:44.205780    4988 start.go:298] selected driver: qemu2
	I1002 03:55:44.205787    4988 start.go:902] validating driver "qemu2" against &{Name:embed-certs-344000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-344000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:55:44.205842    4988 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:55:44.208119    4988 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:55:44.208144    4988 cni.go:84] Creating CNI manager for ""
	I1002 03:55:44.208151    4988 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:55:44.208158    4988 start_flags.go:321] config:
	{Name:embed-certs-344000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-344000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:55:44.212422    4988 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:44.219995    4988 out.go:177] * Starting control plane node embed-certs-344000 in cluster embed-certs-344000
	I1002 03:55:44.223929    4988 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:55:44.223950    4988 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:55:44.223962    4988 cache.go:57] Caching tarball of preloaded images
	I1002 03:55:44.224012    4988 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:55:44.224017    4988 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:55:44.224069    4988 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/embed-certs-344000/config.json ...
	I1002 03:55:44.224503    4988 start.go:365] acquiring machines lock for embed-certs-344000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:44.224528    4988 start.go:369] acquired machines lock for "embed-certs-344000" in 19.291µs
	I1002 03:55:44.224536    4988 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:55:44.224541    4988 fix.go:54] fixHost starting: 
	I1002 03:55:44.224650    4988 fix.go:102] recreateIfNeeded on embed-certs-344000: state=Stopped err=<nil>
	W1002 03:55:44.224672    4988 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:55:44.232865    4988 out.go:177] * Restarting existing qemu2 VM for "embed-certs-344000" ...
	I1002 03:55:44.236968    4988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:bb:57:de:de:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2
	I1002 03:55:44.238953    4988 main.go:141] libmachine: STDOUT: 
	I1002 03:55:44.238972    4988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:44.238998    4988 fix.go:56] fixHost completed within 14.457167ms
	I1002 03:55:44.239004    4988 start.go:83] releasing machines lock for "embed-certs-344000", held for 14.471584ms
	W1002 03:55:44.239009    4988 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:55:44.239039    4988 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:44.239044    4988 start.go:703] Will try again in 5 seconds ...
	I1002 03:55:49.241239    4988 start.go:365] acquiring machines lock for embed-certs-344000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:49.241650    4988 start.go:369] acquired machines lock for "embed-certs-344000" in 308.167µs
	I1002 03:55:49.241822    4988 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:55:49.241845    4988 fix.go:54] fixHost starting: 
	I1002 03:55:49.242555    4988 fix.go:102] recreateIfNeeded on embed-certs-344000: state=Stopped err=<nil>
	W1002 03:55:49.242584    4988 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:55:49.251083    4988 out.go:177] * Restarting existing qemu2 VM for "embed-certs-344000" ...
	I1002 03:55:49.256414    4988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:bb:57:de:de:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/embed-certs-344000/disk.qcow2
	I1002 03:55:49.266389    4988 main.go:141] libmachine: STDOUT: 
	I1002 03:55:49.266460    4988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:49.266526    4988 fix.go:56] fixHost completed within 24.687792ms
	I1002 03:55:49.266545    4988 start.go:83] releasing machines lock for "embed-certs-344000", held for 24.87675ms
	W1002 03:55:49.266740    4988 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-344000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-344000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:49.272409    4988 out.go:177] 
	W1002 03:55:49.276107    4988 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:55:49.276138    4988 out.go:239] * 
	* 
	W1002 03:55:49.279348    4988 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:55:49.286108    4988 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-344000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000: exit status 7 (65.844333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-344000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.24s)
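Every failure in this group bottoms out in the same driver error, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, raised when `socket_vmnet_client` dials the daemon's Unix socket. As a rough sketch (not part of the test suite), the condition can be checked directly from Go; `probeSocket` is a hypothetical helper, and the socket path is the one from the log:

```go
package main

import (
	"fmt"
	"net"
)

// probeSocket dials a Unix-domain socket the same way a client of
// socket_vmnet would. A dial error (ECONNREFUSED, or ENOENT if the path
// does not exist) corresponds to the "Connection refused" lines above:
// the socket_vmnet daemon is not listening on that path.
func probeSocket(path string) error {
	conn, err := net.Dial("unix", path)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// Path taken from the log; on a healthy host this prints <nil>.
	fmt.Println(probeSocket("/var/run/socket_vmnet"))
}
```

A nil result would suggest the daemon is up and the repeated `RestartKeepsNodes`/`SecondStart`-style failures lie elsewhere; an error points at the host-side `socket_vmnet` service rather than at minikube itself.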

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-344000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000: exit status 7 (31.600334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-344000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-344000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-344000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-344000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.722125ms)

** stderr ** 
	error: context "embed-certs-344000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-344000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000: exit status 7 (28.217334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-344000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-344000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-344000 "sudo crictl images -o json": exit status 89 (39.456791ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-344000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-344000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-344000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000: exit status 7 (27.905417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-344000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
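The `VerifyKubernetesImages` failure above is a cascade: because the node is stopped, `ssh` prints a usage message instead of JSON, so the subsequent decode fails with `invalid character '*'`. A minimal sketch of that decode step, assuming the `images`/`repoTags` field names of the `crictl images -o json` payload (an assumption here, not taken from the log):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the subset of the `crictl images -o json` output that
// the image check needs; the field names are assumed from the CRI schema.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// repoTags flattens every repo tag out of the raw payload. Feeding it the
// "* The control plane node must be running..." text from the log would
// return a JSON syntax error, matching the test's failure mode.
func repoTags(raw []byte) ([]string, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	var tags []string
	for _, img := range list.Images {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	tags, err := repoTags([]byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`))
	fmt.Println(tags, err)
}
```

The "v1.28.2 images missing" diff that follows is then just this tag list compared against the expected image set, with every entry absent because no JSON was ever decoded.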

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-344000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-344000 --alsologtostderr -v=1: exit status 89 (40.674834ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-344000"

-- /stdout --
** stderr ** 
	I1002 03:55:49.546746    5007 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:55:49.546911    5007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:49.546915    5007 out.go:309] Setting ErrFile to fd 2...
	I1002 03:55:49.546918    5007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:49.547049    5007 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:55:49.547253    5007 out.go:303] Setting JSON to false
	I1002 03:55:49.547264    5007 mustload.go:65] Loading cluster: embed-certs-344000
	I1002 03:55:49.547449    5007 config.go:182] Loaded profile config "embed-certs-344000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:55:49.551905    5007 out.go:177] * The control plane node must be running for this command
	I1002 03:55:49.555988    5007 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-344000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-344000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000: exit status 7 (28.597542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-344000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000: exit status 7 (28.393959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-344000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-061000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-061000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (9.829509333s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-061000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-061000 in cluster default-k8s-diff-port-061000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-061000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 03:55:50.257033    5042 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:55:50.257179    5042 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:50.257182    5042 out.go:309] Setting ErrFile to fd 2...
	I1002 03:55:50.257185    5042 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:55:50.257332    5042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:55:50.258383    5042 out.go:303] Setting JSON to false
	I1002 03:55:50.274440    5042 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1524,"bootTime":1696242626,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:55:50.274546    5042 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:55:50.279178    5042 out.go:177] * [default-k8s-diff-port-061000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:55:50.285060    5042 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:55:50.289102    5042 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:55:50.285104    5042 notify.go:220] Checking for updates...
	I1002 03:55:50.295095    5042 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:55:50.298126    5042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:55:50.301187    5042 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:55:50.302697    5042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:55:50.306483    5042 config.go:182] Loaded profile config "cert-expiration-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:55:50.306547    5042 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:55:50.306587    5042 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:55:50.311143    5042 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:55:50.316099    5042 start.go:298] selected driver: qemu2
	I1002 03:55:50.316106    5042 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:55:50.316134    5042 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:55:50.318433    5042 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:55:50.321148    5042 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:55:50.324259    5042 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:55:50.324277    5042 cni.go:84] Creating CNI manager for ""
	I1002 03:55:50.324284    5042 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:55:50.324291    5042 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:55:50.324295    5042 start_flags.go:321] config:
	{Name:default-k8s-diff-port-061000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-061000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:55:50.328875    5042 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:55:50.336162    5042 out.go:177] * Starting control plane node default-k8s-diff-port-061000 in cluster default-k8s-diff-port-061000
	I1002 03:55:50.340106    5042 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:55:50.340118    5042 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:55:50.340128    5042 cache.go:57] Caching tarball of preloaded images
	I1002 03:55:50.340175    5042 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:55:50.340180    5042 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:55:50.340235    5042 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/default-k8s-diff-port-061000/config.json ...
	I1002 03:55:50.340247    5042 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/default-k8s-diff-port-061000/config.json: {Name:mk25455f3172a07ae27e652403f22c58575d7b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:55:50.340460    5042 start.go:365] acquiring machines lock for default-k8s-diff-port-061000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:50.340492    5042 start.go:369] acquired machines lock for "default-k8s-diff-port-061000" in 25.334µs
	I1002 03:55:50.340504    5042 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-061000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:55:50.340537    5042 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:55:50.349118    5042 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:55:50.366207    5042 start.go:159] libmachine.API.Create for "default-k8s-diff-port-061000" (driver="qemu2")
	I1002 03:55:50.366239    5042 client.go:168] LocalClient.Create starting
	I1002 03:55:50.366296    5042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:55:50.366324    5042 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:50.366334    5042 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:50.366370    5042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:55:50.366389    5042 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:50.366396    5042 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:50.366748    5042 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:55:50.478521    5042 main.go:141] libmachine: Creating SSH key...
	I1002 03:55:50.533394    5042 main.go:141] libmachine: Creating Disk image...
	I1002 03:55:50.533405    5042 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:55:50.533579    5042 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2
	I1002 03:55:50.542342    5042 main.go:141] libmachine: STDOUT: 
	I1002 03:55:50.542358    5042 main.go:141] libmachine: STDERR: 
	I1002 03:55:50.542405    5042 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2 +20000M
	I1002 03:55:50.549942    5042 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:55:50.549954    5042 main.go:141] libmachine: STDERR: 
	I1002 03:55:50.549968    5042 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2
	I1002 03:55:50.549976    5042 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:55:50.550026    5042 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:c3:ad:49:6b:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2
	I1002 03:55:50.551678    5042 main.go:141] libmachine: STDOUT: 
	I1002 03:55:50.551693    5042 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:50.551711    5042 client.go:171] LocalClient.Create took 185.47225ms
	I1002 03:55:52.553841    5042 start.go:128] duration metric: createHost completed in 2.213329041s
	I1002 03:55:52.553915    5042 start.go:83] releasing machines lock for "default-k8s-diff-port-061000", held for 2.213459667s
	W1002 03:55:52.553956    5042 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:52.563983    5042 out.go:177] * Deleting "default-k8s-diff-port-061000" in qemu2 ...
	W1002 03:55:52.585530    5042 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:55:52.585564    5042 start.go:703] Will try again in 5 seconds ...
	I1002 03:55:57.587664    5042 start.go:365] acquiring machines lock for default-k8s-diff-port-061000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:55:57.588050    5042 start.go:369] acquired machines lock for "default-k8s-diff-port-061000" in 309.666µs
	I1002 03:55:57.588173    5042 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-061000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:55:57.588442    5042 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:55:57.600036    5042 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:55:57.647949    5042 start.go:159] libmachine.API.Create for "default-k8s-diff-port-061000" (driver="qemu2")
	I1002 03:55:57.647986    5042 client.go:168] LocalClient.Create starting
	I1002 03:55:57.648111    5042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:55:57.648159    5042 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:57.648184    5042 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:57.648249    5042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:55:57.648284    5042 main.go:141] libmachine: Decoding PEM data...
	I1002 03:55:57.648299    5042 main.go:141] libmachine: Parsing certificate...
	I1002 03:55:57.648814    5042 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:55:57.773749    5042 main.go:141] libmachine: Creating SSH key...
	I1002 03:55:57.979595    5042 main.go:141] libmachine: Creating Disk image...
	I1002 03:55:57.979602    5042 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:55:57.979810    5042 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2
	I1002 03:55:57.989518    5042 main.go:141] libmachine: STDOUT: 
	I1002 03:55:57.989535    5042 main.go:141] libmachine: STDERR: 
	I1002 03:55:57.989597    5042 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2 +20000M
	I1002 03:55:57.997160    5042 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:55:57.997171    5042 main.go:141] libmachine: STDERR: 
	I1002 03:55:57.997187    5042 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2
	I1002 03:55:57.997202    5042 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:55:57.997236    5042 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:00:c8:85:e0:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2
	I1002 03:55:57.998926    5042 main.go:141] libmachine: STDOUT: 
	I1002 03:55:57.998938    5042 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:55:57.998954    5042 client.go:171] LocalClient.Create took 350.97175ms
	I1002 03:56:00.001075    5042 start.go:128] duration metric: createHost completed in 2.412655916s
	I1002 03:56:00.001161    5042 start.go:83] releasing machines lock for "default-k8s-diff-port-061000", held for 2.413136791s
	W1002 03:56:00.001586    5042 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-061000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-061000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:56:00.008366    5042 out.go:177] 
	W1002 03:56:00.028468    5042 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:56:00.028534    5042 out.go:239] * 
	* 
	W1002 03:56:00.031132    5042 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:56:00.041102    5042 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-061000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000: exit status 7 (60.906125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)
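Note: every qemu2 start failure in this run fails the same way: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon on the CI host was not listening before the driver launched QEMU. A minimal pre-flight check is sketched below; the socket path comes from the log above, while the restart command in the comment is an assumption about how the daemon is managed on this host:

```shell
# Report whether a unix socket exists at the given path before the
# qemu2 driver tries to connect to it.
check_socket() {
  sock="$1"
  if [ -S "$sock" ]; then
    echo "socket exists: $sock"
  else
    echo "socket missing: $sock"
    return 1
  fi
}

# If the socket is gone, the daemon likely needs a restart, e.g.
# (assumed service management): sudo brew services restart socket_vmnet
check_socket /var/run/socket_vmnet || true
```

Existence of the socket file does not prove the daemon is accepting connections, but its absence explains the "Connection refused" seen in both create attempts.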

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-061000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-061000 create -f testdata/busybox.yaml: exit status 1 (29.674084ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-061000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000: exit status 7 (27.136292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-061000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000: exit status 7 (27.546458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-061000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-061000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-061000 describe deploy/metrics-server -n kube-system: exit status 1 (26.25775ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-061000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-061000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000: exit status 7 (28.0255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
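Note: the kubectl errors here (`no openapi getter`, `context "default-k8s-diff-port-061000" does not exist`) are downstream of the failed FirstStart: the cluster was never provisioned, so no kubeconfig context was written. A hedged sketch of a guard the post-mortem steps could use before running kubectl (`has_context` is a hypothetical helper; the context name is the one from this run):

```shell
# Return success only if the named kubeconfig context exists;
# tolerate a missing kubectl binary by treating it as "no contexts".
has_context() {
  kubectl config get-contexts -o name 2>/dev/null | grep -qx "$1"
}

if has_context default-k8s-diff-port-061000; then
  echo "context present"
else
  echo "context absent: skipping kubectl post-mortem"
fi
```

This would let the post-mortem distinguish "cluster never came up" from genuine kubectl failures against a live cluster.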

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-061000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-061000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (7.360711792s)

-- stdout --
	* [default-k8s-diff-port-061000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-061000 in cluster default-k8s-diff-port-061000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-061000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-061000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:56:00.493936    5083 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:56:00.494070    5083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:56:00.494074    5083 out.go:309] Setting ErrFile to fd 2...
	I1002 03:56:00.494078    5083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:56:00.494207    5083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:56:00.495224    5083 out.go:303] Setting JSON to false
	I1002 03:56:00.511147    5083 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1534,"bootTime":1696242626,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:56:00.511230    5083 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:56:00.516286    5083 out.go:177] * [default-k8s-diff-port-061000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:56:00.522255    5083 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:56:00.526301    5083 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:56:00.522322    5083 notify.go:220] Checking for updates...
	I1002 03:56:00.532280    5083 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:56:00.535231    5083 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:56:00.538261    5083 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:56:00.544188    5083 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:56:00.548549    5083 config.go:182] Loaded profile config "default-k8s-diff-port-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:56:00.548802    5083 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:56:00.553272    5083 out.go:177] * Using the qemu2 driver based on existing profile
	I1002 03:56:00.560233    5083 start.go:298] selected driver: qemu2
	I1002 03:56:00.560241    5083 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-061000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:56:00.560316    5083 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:56:00.562820    5083 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 03:56:00.562845    5083 cni.go:84] Creating CNI manager for ""
	I1002 03:56:00.562854    5083 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:56:00.562861    5083 start_flags.go:321] config:
	{Name:default-k8s-diff-port-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-0610
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:56:00.567276    5083 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:56:00.574255    5083 out.go:177] * Starting control plane node default-k8s-diff-port-061000 in cluster default-k8s-diff-port-061000
	I1002 03:56:00.578257    5083 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:56:00.578282    5083 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:56:00.578299    5083 cache.go:57] Caching tarball of preloaded images
	I1002 03:56:00.578348    5083 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:56:00.578354    5083 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:56:00.578428    5083 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/default-k8s-diff-port-061000/config.json ...
	I1002 03:56:00.578819    5083 start.go:365] acquiring machines lock for default-k8s-diff-port-061000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:56:00.578845    5083 start.go:369] acquired machines lock for "default-k8s-diff-port-061000" in 19.667µs
	I1002 03:56:00.578853    5083 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:56:00.578860    5083 fix.go:54] fixHost starting: 
	I1002 03:56:00.578973    5083 fix.go:102] recreateIfNeeded on default-k8s-diff-port-061000: state=Stopped err=<nil>
	W1002 03:56:00.578982    5083 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:56:00.582214    5083 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-061000" ...
	I1002 03:56:00.590114    5083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:00:c8:85:e0:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2
	I1002 03:56:00.592125    5083 main.go:141] libmachine: STDOUT: 
	I1002 03:56:00.592145    5083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:56:00.592177    5083 fix.go:56] fixHost completed within 13.318666ms
	I1002 03:56:00.592181    5083 start.go:83] releasing machines lock for "default-k8s-diff-port-061000", held for 13.332333ms
	W1002 03:56:00.592186    5083 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:56:00.592224    5083 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:56:00.592229    5083 start.go:703] Will try again in 5 seconds ...
	I1002 03:56:05.594180    5083 start.go:365] acquiring machines lock for default-k8s-diff-port-061000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:56:07.750971    5083 start.go:369] acquired machines lock for "default-k8s-diff-port-061000" in 2.156801375s
	I1002 03:56:07.751103    5083 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:56:07.751125    5083 fix.go:54] fixHost starting: 
	I1002 03:56:07.751895    5083 fix.go:102] recreateIfNeeded on default-k8s-diff-port-061000: state=Stopped err=<nil>
	W1002 03:56:07.751925    5083 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:56:07.762499    5083 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-061000" ...
	I1002 03:56:07.775718    5083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:00:c8:85:e0:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/default-k8s-diff-port-061000/disk.qcow2
	I1002 03:56:07.785802    5083 main.go:141] libmachine: STDOUT: 
	I1002 03:56:07.785874    5083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:56:07.786002    5083 fix.go:56] fixHost completed within 34.873ms
	I1002 03:56:07.786023    5083 start.go:83] releasing machines lock for "default-k8s-diff-port-061000", held for 34.992875ms
	W1002 03:56:07.786314    5083 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-061000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-061000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:56:07.793441    5083 out.go:177] 
	W1002 03:56:07.797603    5083 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:56:07.797622    5083 out.go:239] * 
	* 
	W1002 03:56:07.799598    5083 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:56:07.810606    5083 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-061000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000: exit status 7 (65.763625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.43s)
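The recurring root cause in this group is host-side, not cluster-side: every restart fails with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning no socket_vmnet daemon was listening when the qemu2 driver tried to attach the VM's network. A minimal pre-flight sketch for reproducing this locally, assuming socket_vmnet was installed via Homebrew at minikube's default socket path (the `SOCKET_VMNET_PATH` variable and the `brew services` hint are assumptions, not part of this test harness):

```shell
#!/bin/sh
# Pre-flight check for minikube's qemu2 + socket_vmnet networking.
# Assumption: socket_vmnet installed via Homebrew, socket at /var/run/socket_vmnet.

check_socket() {
  # Exit 0 only if the path exists and is a UNIX domain socket.
  [ -S "$1" ]
}

SOCK="${SOCKET_VMNET_PATH:-/var/run/socket_vmnet}"
if check_socket "$SOCK"; then
  echo "socket_vmnet socket present at $SOCK"
else
  echo "no socket at $SOCK; start the daemon first, e.g.: sudo brew services start socket_vmnet"
fi
```

If the socket is absent on the CI agent, every qemu2-driver test in this run will fail the same way regardless of the Kubernetes version or test scenario under test.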

TestStartStop/group/newest-cni/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-191000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-191000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (9.817980375s)

-- stdout --
	* [newest-cni-191000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-191000 in cluster newest-cni-191000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-191000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1002 03:56:05.297332    5101 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:56:05.297474    5101 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:56:05.297481    5101 out.go:309] Setting ErrFile to fd 2...
	I1002 03:56:05.297485    5101 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:56:05.297619    5101 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:56:05.298660    5101 out.go:303] Setting JSON to false
	I1002 03:56:05.314804    5101 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1539,"bootTime":1696242626,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:56:05.314891    5101 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:56:05.320530    5101 out.go:177] * [newest-cni-191000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:56:05.328504    5101 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:56:05.331381    5101 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:56:05.328552    5101 notify.go:220] Checking for updates...
	I1002 03:56:05.334463    5101 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:56:05.337437    5101 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:56:05.338787    5101 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:56:05.341466    5101 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:56:05.344769    5101 config.go:182] Loaded profile config "default-k8s-diff-port-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:56:05.344835    5101 config.go:182] Loaded profile config "multinode-335000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:56:05.344889    5101 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:56:05.349294    5101 out.go:177] * Using the qemu2 driver based on user configuration
	I1002 03:56:05.356393    5101 start.go:298] selected driver: qemu2
	I1002 03:56:05.356400    5101 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:56:05.356406    5101 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:56:05.358512    5101 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W1002 03:56:05.358538    5101 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1002 03:56:05.366440    5101 out.go:177] * Automatically selected the socket_vmnet network
	I1002 03:56:05.369558    5101 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 03:56:05.369584    5101 cni.go:84] Creating CNI manager for ""
	I1002 03:56:05.369600    5101 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:56:05.369606    5101 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 03:56:05.369612    5101 start_flags.go:321] config:
	{Name:newest-cni-191000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-191000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:56:05.373952    5101 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:56:05.381443    5101 out.go:177] * Starting control plane node newest-cni-191000 in cluster newest-cni-191000
	I1002 03:56:05.385476    5101 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:56:05.385492    5101 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:56:05.385504    5101 cache.go:57] Caching tarball of preloaded images
	I1002 03:56:05.385561    5101 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:56:05.385566    5101 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:56:05.385631    5101 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/newest-cni-191000/config.json ...
	I1002 03:56:05.385642    5101 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/newest-cni-191000/config.json: {Name:mk7c62d9c76d291a40451b630687dc1663b27d88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:56:05.385840    5101 start.go:365] acquiring machines lock for newest-cni-191000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:56:05.385868    5101 start.go:369] acquired machines lock for "newest-cni-191000" in 22.416µs
	I1002 03:56:05.385877    5101 start.go:93] Provisioning new machine with config: &{Name:newest-cni-191000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-191000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:56:05.385934    5101 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:56:05.394475    5101 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:56:05.410524    5101 start.go:159] libmachine.API.Create for "newest-cni-191000" (driver="qemu2")
	I1002 03:56:05.410555    5101 client.go:168] LocalClient.Create starting
	I1002 03:56:05.410608    5101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:56:05.410632    5101 main.go:141] libmachine: Decoding PEM data...
	I1002 03:56:05.410642    5101 main.go:141] libmachine: Parsing certificate...
	I1002 03:56:05.410679    5101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:56:05.410703    5101 main.go:141] libmachine: Decoding PEM data...
	I1002 03:56:05.410709    5101 main.go:141] libmachine: Parsing certificate...
	I1002 03:56:05.411054    5101 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:56:05.578217    5101 main.go:141] libmachine: Creating SSH key...
	I1002 03:56:05.730513    5101 main.go:141] libmachine: Creating Disk image...
	I1002 03:56:05.730521    5101 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:56:05.730698    5101 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2
	I1002 03:56:05.739537    5101 main.go:141] libmachine: STDOUT: 
	I1002 03:56:05.739556    5101 main.go:141] libmachine: STDERR: 
	I1002 03:56:05.739608    5101 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2 +20000M
	I1002 03:56:05.746977    5101 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:56:05.746994    5101 main.go:141] libmachine: STDERR: 
	I1002 03:56:05.747007    5101 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2
	I1002 03:56:05.747013    5101 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:56:05.747053    5101 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:48:a5:55:db:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2
	I1002 03:56:05.748597    5101 main.go:141] libmachine: STDOUT: 
	I1002 03:56:05.748613    5101 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:56:05.748629    5101 client.go:171] LocalClient.Create took 338.077292ms
	I1002 03:56:07.750764    5101 start.go:128] duration metric: createHost completed in 2.364855875s
	I1002 03:56:07.750832    5101 start.go:83] releasing machines lock for "newest-cni-191000", held for 2.365003834s
	W1002 03:56:07.750912    5101 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:56:07.772551    5101 out.go:177] * Deleting "newest-cni-191000" in qemu2 ...
	W1002 03:56:07.803433    5101 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:56:07.803475    5101 start.go:703] Will try again in 5 seconds ...
	I1002 03:56:12.805187    5101 start.go:365] acquiring machines lock for newest-cni-191000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:56:12.805620    5101 start.go:369] acquired machines lock for "newest-cni-191000" in 309µs
	I1002 03:56:12.805750    5101 start.go:93] Provisioning new machine with config: &{Name:newest-cni-191000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-191000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 03:56:12.805986    5101 start.go:125] createHost starting for "" (driver="qemu2")
	I1002 03:56:12.811714    5101 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 03:56:12.863039    5101 start.go:159] libmachine.API.Create for "newest-cni-191000" (driver="qemu2")
	I1002 03:56:12.863079    5101 client.go:168] LocalClient.Create starting
	I1002 03:56:12.863210    5101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/ca.pem
	I1002 03:56:12.863263    5101 main.go:141] libmachine: Decoding PEM data...
	I1002 03:56:12.863291    5101 main.go:141] libmachine: Parsing certificate...
	I1002 03:56:12.863377    5101 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-994/.minikube/certs/cert.pem
	I1002 03:56:12.863416    5101 main.go:141] libmachine: Decoding PEM data...
	I1002 03:56:12.863433    5101 main.go:141] libmachine: Parsing certificate...
	I1002 03:56:12.864013    5101 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17340-994/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1002 03:56:12.988290    5101 main.go:141] libmachine: Creating SSH key...
	I1002 03:56:13.029277    5101 main.go:141] libmachine: Creating Disk image...
	I1002 03:56:13.029286    5101 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1002 03:56:13.029448    5101 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2.raw /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2
	I1002 03:56:13.038304    5101 main.go:141] libmachine: STDOUT: 
	I1002 03:56:13.038319    5101 main.go:141] libmachine: STDERR: 
	I1002 03:56:13.038371    5101 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2 +20000M
	I1002 03:56:13.045800    5101 main.go:141] libmachine: STDOUT: Image resized.
	
	I1002 03:56:13.045812    5101 main.go:141] libmachine: STDERR: 
	I1002 03:56:13.045825    5101 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2
	I1002 03:56:13.045831    5101 main.go:141] libmachine: Starting QEMU VM...
	I1002 03:56:13.045877    5101 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:2c:83:6d:13:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2
	I1002 03:56:13.047550    5101 main.go:141] libmachine: STDOUT: 
	I1002 03:56:13.047562    5101 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:56:13.047575    5101 client.go:171] LocalClient.Create took 184.495375ms
	I1002 03:56:15.049712    5101 start.go:128] duration metric: createHost completed in 2.243745417s
	I1002 03:56:15.049793    5101 start.go:83] releasing machines lock for "newest-cni-191000", held for 2.244196208s
	W1002 03:56:15.050312    5101 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-191000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-191000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:56:15.060984    5101 out.go:177] 
	W1002 03:56:15.065031    5101 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:56:15.065065    5101 out.go:239] * 
	* 
	W1002 03:56:15.067857    5101 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:56:15.076916    5101 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-191000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-191000 -n newest-cni-191000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-191000 -n newest-cni-191000: exit status 7 (66.693875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-191000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.89s)
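Every failure in this group traces to the same root cause visible in the log above: `socket_vmnet_client` cannot reach `/var/run/socket_vmnet` ("Connection refused"), so the QEMU VM is never created. A minimal Go sketch of a pre-flight probe for that endpoint — the socket path is taken from the failure message, and the helper name `probeSocket` is hypothetical, not part of minikube:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeSocket attempts a short-lived connection to a unix socket.
// It returns nil if the socket accepted the connection, or the dial
// error otherwise (e.g. "connection refused" when the daemon is down,
// or "no such file or directory" when the socket was never created).
func probeSocket(path string) error {
	conn, err := net.DialTimeout("unix", path, 500*time.Millisecond)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// Path taken from the failure message in the log above.
	if err := probeSocket("/var/run/socket_vmnet"); err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
	} else {
		fmt.Println("socket_vmnet reachable")
	}
}
```

Running a check like this on the CI host before the suite would distinguish "socket_vmnet daemon not running" from genuine minikube regressions.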

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-061000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000: exit status 7 (31.587958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-061000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-061000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-061000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.962583ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-061000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-061000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000: exit status 7 (27.772208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-061000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-061000 "sudo crictl images -o json": exit status 89 (39.478791ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-061000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-061000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-061000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000: exit status 7 (27.654166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
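The decode error above ("invalid character '*' looking for beginning of value") is what Go's `encoding/json` reports when the CLI's advisory text is fed to the decoder instead of the expected `crictl images -o json` output. A small sketch reproducing it — the struct shape here is an assumption for illustration, not crictl's actual schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeImages unmarshals crictl-style image JSON. On non-JSON input
// (such as an advisory message starting with '*'), json.Unmarshal
// fails at the first byte, which is the error seen in the test log.
func decodeImages(raw []byte) error {
	var parsed struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	return json.Unmarshal(raw, &parsed)
}

func main() {
	out := []byte("* The control plane node must be running for this command")
	if err := decodeImages(out); err != nil {
		fmt.Println("failed to decode images json:", err)
	}
}
```

Because the host was stopped (exit status 89), the advisory line replaced the JSON payload entirely, so the subsequent image-list diff reports every expected v1.28.2 image as missing.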

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-061000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-061000 --alsologtostderr -v=1: exit status 89 (38.974125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-061000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 03:56:08.077738    5123 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:56:08.077917    5123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:56:08.077920    5123 out.go:309] Setting ErrFile to fd 2...
	I1002 03:56:08.077923    5123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:56:08.078055    5123 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:56:08.078264    5123 out.go:303] Setting JSON to false
	I1002 03:56:08.078277    5123 mustload.go:65] Loading cluster: default-k8s-diff-port-061000
	I1002 03:56:08.078470    5123 config.go:182] Loaded profile config "default-k8s-diff-port-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:56:08.082100    5123 out.go:177] * The control plane node must be running for this command
	I1002 03:56:08.086129    5123 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-061000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-061000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000: exit status 7 (27.945958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-061000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000: exit status 7 (27.624709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-191000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-191000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (5.176530834s)

                                                
                                                
-- stdout --
	* [newest-cni-191000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-191000 in cluster newest-cni-191000
	* Restarting existing qemu2 VM for "newest-cni-191000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-191000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 03:56:15.394034    5164 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:56:15.394195    5164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:56:15.394198    5164 out.go:309] Setting ErrFile to fd 2...
	I1002 03:56:15.394201    5164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:56:15.394346    5164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:56:15.395378    5164 out.go:303] Setting JSON to false
	I1002 03:56:15.411294    5164 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1549,"bootTime":1696242626,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:56:15.411379    5164 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:56:15.416386    5164 out.go:177] * [newest-cni-191000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:56:15.422383    5164 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:56:15.422439    5164 notify.go:220] Checking for updates...
	I1002 03:56:15.426409    5164 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:56:15.429352    5164 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:56:15.432339    5164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:56:15.435363    5164 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:56:15.438320    5164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:56:15.441600    5164 config.go:182] Loaded profile config "newest-cni-191000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:56:15.441849    5164 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:56:15.446354    5164 out.go:177] * Using the qemu2 driver based on existing profile
	I1002 03:56:15.453301    5164 start.go:298] selected driver: qemu2
	I1002 03:56:15.453307    5164 start.go:902] validating driver "qemu2" against &{Name:newest-cni-191000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-191000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<n
il> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:56:15.453361    5164 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:56:15.455717    5164 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 03:56:15.455739    5164 cni.go:84] Creating CNI manager for ""
	I1002 03:56:15.455747    5164 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:56:15.455753    5164 start_flags.go:321] config:
	{Name:newest-cni-191000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-191000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:56:15.460034    5164 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:56:15.467310    5164 out.go:177] * Starting control plane node newest-cni-191000 in cluster newest-cni-191000
	I1002 03:56:15.471384    5164 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:56:15.471397    5164 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:56:15.471409    5164 cache.go:57] Caching tarball of preloaded images
	I1002 03:56:15.471459    5164 preload.go:174] Found /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 03:56:15.471466    5164 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:56:15.471530    5164 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/newest-cni-191000/config.json ...
	I1002 03:56:15.471985    5164 start.go:365] acquiring machines lock for newest-cni-191000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:56:15.472010    5164 start.go:369] acquired machines lock for "newest-cni-191000" in 17.917µs
	I1002 03:56:15.472021    5164 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:56:15.472025    5164 fix.go:54] fixHost starting: 
	I1002 03:56:15.472136    5164 fix.go:102] recreateIfNeeded on newest-cni-191000: state=Stopped err=<nil>
	W1002 03:56:15.472145    5164 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:56:15.475318    5164 out.go:177] * Restarting existing qemu2 VM for "newest-cni-191000" ...
	I1002 03:56:15.483355    5164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:2c:83:6d:13:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2
	I1002 03:56:15.485345    5164 main.go:141] libmachine: STDOUT: 
	I1002 03:56:15.485364    5164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:56:15.485395    5164 fix.go:56] fixHost completed within 13.37025ms
	I1002 03:56:15.485400    5164 start.go:83] releasing machines lock for "newest-cni-191000", held for 13.3835ms
	W1002 03:56:15.485406    5164 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:56:15.485440    5164 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:56:15.485444    5164 start.go:703] Will try again in 5 seconds ...
	I1002 03:56:20.487692    5164 start.go:365] acquiring machines lock for newest-cni-191000: {Name:mk8240c8ab9eff6dc1f9c13e10508d2963fe73d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 03:56:20.488112    5164 start.go:369] acquired machines lock for "newest-cni-191000" in 332.292µs
	I1002 03:56:20.488298    5164 start.go:96] Skipping create...Using existing machine configuration
	I1002 03:56:20.488319    5164 fix.go:54] fixHost starting: 
	I1002 03:56:20.489027    5164 fix.go:102] recreateIfNeeded on newest-cni-191000: state=Stopped err=<nil>
	W1002 03:56:20.489054    5164 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 03:56:20.494648    5164 out.go:177] * Restarting existing qemu2 VM for "newest-cni-191000" ...
	I1002 03:56:20.499823    5164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:2c:83:6d:13:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17340-994/.minikube/machines/newest-cni-191000/disk.qcow2
	I1002 03:56:20.509613    5164 main.go:141] libmachine: STDOUT: 
	I1002 03:56:20.509667    5164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1002 03:56:20.509793    5164 fix.go:56] fixHost completed within 21.4765ms
	I1002 03:56:20.509813    5164 start.go:83] releasing machines lock for "newest-cni-191000", held for 21.678333ms
	W1002 03:56:20.509984    5164 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-191000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-191000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1002 03:56:20.517554    5164 out.go:177] 
	W1002 03:56:20.521431    5164 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1002 03:56:20.521455    5164 out.go:239] * 
	* 
	W1002 03:56:20.524004    5164 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:56:20.531529    5164 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-191000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-191000 -n newest-cni-191000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-191000 -n newest-cni-191000: exit status 7 (66.648542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-191000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-191000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-191000 "sudo crictl images -o json": exit status 89 (44.122833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-191000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-191000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-191000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-191000 -n newest-cni-191000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-191000 -n newest-cni-191000: exit status 7 (28.801875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-191000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-191000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-191000 --alsologtostderr -v=1: exit status 89 (42.85975ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-191000"

-- /stdout --
** stderr ** 
	I1002 03:56:20.712359    5178 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:56:20.712543    5178 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:56:20.712546    5178 out.go:309] Setting ErrFile to fd 2...
	I1002 03:56:20.712548    5178 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:56:20.712668    5178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:56:20.712870    5178 out.go:303] Setting JSON to false
	I1002 03:56:20.712877    5178 mustload.go:65] Loading cluster: newest-cni-191000
	I1002 03:56:20.713073    5178 config.go:182] Loaded profile config "newest-cni-191000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:56:20.715952    5178 out.go:177] * The control plane node must be running for this command
	I1002 03:56:20.723818    5178 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-191000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-191000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-191000 -n newest-cni-191000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-191000 -n newest-cni-191000: exit status 7 (28.32475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-191000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-191000 -n newest-cni-191000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-191000 -n newest-cni-191000: exit status 7 (28.430542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-191000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (136/244)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.2/json-events 7.05
11 TestDownloadOnly/v1.28.2/preload-exists 0
14 TestDownloadOnly/v1.28.2/kubectl 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.25
19 TestBinaryMirror 0.38
30 TestHyperKitDriverInstallOrUpdate 8.05
34 TestErrorSpam/start 0.34
35 TestErrorSpam/status 0.22
36 TestErrorSpam/pause 4.68
37 TestErrorSpam/unpause 5.32
38 TestErrorSpam/stop 108.38
41 TestFunctional/serial/CopySyncFile 0
42 TestFunctional/serial/StartWithProxy 45.6
43 TestFunctional/serial/AuditLog 0
44 TestFunctional/serial/SoftStart 32.09
45 TestFunctional/serial/KubeContext 0.03
46 TestFunctional/serial/KubectlGetPods 0.05
49 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
50 TestFunctional/serial/CacheCmd/cache/add_local 1.25
51 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
52 TestFunctional/serial/CacheCmd/cache/list 0.03
53 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
54 TestFunctional/serial/CacheCmd/cache/cache_reload 0.91
55 TestFunctional/serial/CacheCmd/cache/delete 0.07
56 TestFunctional/serial/MinikubeKubectlCmd 0.41
57 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.54
58 TestFunctional/serial/ExtraConfig 37.64
59 TestFunctional/serial/ComponentHealth 0.04
60 TestFunctional/serial/LogsCmd 0.66
61 TestFunctional/serial/LogsFileCmd 0.63
62 TestFunctional/serial/InvalidService 4.32
64 TestFunctional/parallel/ConfigCmd 0.21
65 TestFunctional/parallel/DashboardCmd 9.57
66 TestFunctional/parallel/DryRun 0.22
67 TestFunctional/parallel/InternationalLanguage 0.11
68 TestFunctional/parallel/StatusCmd 0.23
73 TestFunctional/parallel/AddonsCmd 0.18
74 TestFunctional/parallel/PersistentVolumeClaim 25.56
76 TestFunctional/parallel/SSHCmd 0.13
77 TestFunctional/parallel/CpCmd 0.28
79 TestFunctional/parallel/FileSync 0.06
80 TestFunctional/parallel/CertSync 0.39
84 TestFunctional/parallel/NodeLabels 0.04
86 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
88 TestFunctional/parallel/License 0.2
89 TestFunctional/parallel/Version/short 0.04
90 TestFunctional/parallel/Version/components 0.21
91 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
92 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
93 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
94 TestFunctional/parallel/ImageCommands/ImageListYaml 0.1
95 TestFunctional/parallel/ImageCommands/ImageBuild 2.52
96 TestFunctional/parallel/ImageCommands/Setup 1.71
97 TestFunctional/parallel/DockerEnv/bash 0.39
98 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
99 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
100 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
101 TestFunctional/parallel/ServiceCmd/DeployApp 11.11
102 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.22
103 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.53
104 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.46
105 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
106 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
107 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
108 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
110 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
111 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.12
114 TestFunctional/parallel/ServiceCmd/List 0.09
115 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
116 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
117 TestFunctional/parallel/ServiceCmd/Format 0.1
118 TestFunctional/parallel/ServiceCmd/URL 0.1
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
126 TestFunctional/parallel/ProfileCmd/profile_list 0.14
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
128 TestFunctional/parallel/MountCmd/any-port 5.43
129 TestFunctional/parallel/MountCmd/specific-port 0.81
131 TestFunctional/delete_addon-resizer_images 0.12
132 TestFunctional/delete_my-image_image 0.04
133 TestFunctional/delete_minikube_cached_images 0.04
137 TestImageBuild/serial/Setup 29.94
138 TestImageBuild/serial/NormalBuild 1.04
140 TestImageBuild/serial/BuildWithDockerIgnore 0.18
141 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
144 TestIngressAddonLegacy/StartLegacyK8sCluster 74.29
146 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 15.35
147 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.25
151 TestJSONOutput/start/Command 83.02
152 TestJSONOutput/start/Audit 0
154 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/pause/Command 0.27
158 TestJSONOutput/pause/Audit 0
160 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/unpause/Command 0.2
164 TestJSONOutput/unpause/Audit 0
166 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/stop/Command 12.07
170 TestJSONOutput/stop/Audit 0
172 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
174 TestErrorJSONOutput 0.36
179 TestMainNoArgs 0.03
180 TestMinikubeProfile 61.69
234 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
242 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
243 TestNoKubernetes/serial/ProfileList 0.14
244 TestNoKubernetes/serial/Stop 0.06
246 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
260 TestStartStop/group/old-k8s-version/serial/Stop 0.06
261 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
271 TestStartStop/group/no-preload/serial/Stop 0.06
272 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
282 TestStartStop/group/embed-certs/serial/Stop 0.06
283 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
293 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
294 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
302 TestStartStop/group/newest-cni/serial/DeployApp 0
303 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
304 TestStartStop/group/newest-cni/serial/Stop 0.06
305 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
307 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-679000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-679000: exit status 85 (92.848208ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-679000 | jenkins | v1.31.2 | 02 Oct 23 03:34 PDT |          |
	|         | -p download-only-679000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 03:34:59
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 03:34:59.823596    1411 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:34:59.823742    1411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:34:59.823745    1411 out.go:309] Setting ErrFile to fd 2...
	I1002 03:34:59.823748    1411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:34:59.823857    1411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	W1002 03:34:59.823946    1411 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17340-994/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17340-994/.minikube/config/config.json: no such file or directory
	I1002 03:34:59.825093    1411 out.go:303] Setting JSON to true
	I1002 03:34:59.842620    1411 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":272,"bootTime":1696242627,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:34:59.842721    1411 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:34:59.851040    1411 out.go:97] [download-only-679000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:34:59.854079    1411 out.go:169] MINIKUBE_LOCATION=17340
	I1002 03:34:59.851196    1411 notify.go:220] Checking for updates...
	W1002 03:34:59.851189    1411 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 03:34:59.863076    1411 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:34:59.870983    1411 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:34:59.878871    1411 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:34:59.882005    1411 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	W1002 03:34:59.888091    1411 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 03:34:59.888306    1411 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:34:59.893023    1411 out.go:97] Using the qemu2 driver based on user configuration
	I1002 03:34:59.893029    1411 start.go:298] selected driver: qemu2
	I1002 03:34:59.893032    1411 start.go:902] validating driver "qemu2" against <nil>
	I1002 03:34:59.893094    1411 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:34:59.896989    1411 out.go:169] Automatically selected the socket_vmnet network
	I1002 03:34:59.903037    1411 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1002 03:34:59.903118    1411 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 03:34:59.903190    1411 cni.go:84] Creating CNI manager for ""
	I1002 03:34:59.903210    1411 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 03:34:59.903215    1411 start_flags.go:321] config:
	{Name:download-only-679000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-679000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:34:59.909298    1411 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:34:59.913079    1411 out.go:97] Downloading VM boot image ...
	I1002 03:34:59.913122    1411 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso
	I1002 03:35:04.218339    1411 out.go:97] Starting control plane node download-only-679000 in cluster download-only-679000
	I1002 03:35:04.218364    1411 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 03:35:04.268459    1411 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1002 03:35:04.268479    1411 cache.go:57] Caching tarball of preloaded images
	I1002 03:35:04.268615    1411 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 03:35:04.273281    1411 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1002 03:35:04.273287    1411 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1002 03:35:04.351307    1411 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1002 03:35:09.228255    1411 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1002 03:35:09.228400    1411 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1002 03:35:09.869196    1411 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1002 03:35:09.869391    1411 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/download-only-679000/config.json ...
	I1002 03:35:09.869410    1411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/download-only-679000/config.json: {Name:mk9ffb537985013462866bd2ba05410dfae7c50b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 03:35:09.869638    1411 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 03:35:09.869891    1411 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I1002 03:35:10.271642    1411 out.go:169] 
	W1002 03:35:10.275607    1411 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17340-994/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103bd1880 0x103bd1880 0x103bd1880 0x103bd1880 0x103bd1880 0x103bd1880 0x103bd1880] Decompressors:map[bz2:0x14000196a20 gz:0x14000196a28 tar:0x14000196990 tar.bz2:0x140001969a0 tar.gz:0x140001969b0 tar.xz:0x140001969d0 tar.zst:0x140001969e0 tbz2:0x140001969a0 tgz:0x140001969b0 txz:0x140001969d0 tzst:0x140001969e0 xz:0x14000196a30 zip:0x14000196a40 zst:0x14000196a38] Getters:map[file:0x14000708910 http:0x140004a2640 https:0x140004a2690] Dir:false ProgressListener:
<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1002 03:35:10.275635    1411 out_reason.go:110] 
	W1002 03:35:10.281645    1411 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 03:35:10.285627    1411 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-679000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.2/json-events (7.05s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-679000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-679000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=qemu2 : (7.053037666s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (7.05s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
--- PASS: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-679000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-679000: exit status 85 (78.304167ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-679000 | jenkins | v1.31.2 | 02 Oct 23 03:34 PDT |          |
	|         | -p download-only-679000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-679000 | jenkins | v1.31.2 | 02 Oct 23 03:35 PDT |          |
	|         | -p download-only-679000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 03:35:10
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 03:35:10.468557    1424 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:35:10.468692    1424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:35:10.468695    1424 out.go:309] Setting ErrFile to fd 2...
	I1002 03:35:10.468697    1424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:35:10.468834    1424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	W1002 03:35:10.468901    1424 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17340-994/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17340-994/.minikube/config/config.json: no such file or directory
	I1002 03:35:10.469783    1424 out.go:303] Setting JSON to true
	I1002 03:35:10.485646    1424 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":283,"bootTime":1696242627,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:35:10.485735    1424 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:35:10.490139    1424 out.go:97] [download-only-679000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:35:10.494161    1424 out.go:169] MINIKUBE_LOCATION=17340
	I1002 03:35:10.490216    1424 notify.go:220] Checking for updates...
	I1002 03:35:10.501221    1424 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:35:10.504223    1424 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:35:10.507267    1424 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:35:10.510162    1424 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	W1002 03:35:10.516171    1424 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 03:35:10.516455    1424 config.go:182] Loaded profile config "download-only-679000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1002 03:35:10.516490    1424 start.go:810] api.Load failed for download-only-679000: filestore "download-only-679000": Docker machine "download-only-679000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 03:35:10.516536    1424 driver.go:373] Setting default libvirt URI to qemu:///system
	W1002 03:35:10.516554    1424 start.go:810] api.Load failed for download-only-679000: filestore "download-only-679000": Docker machine "download-only-679000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 03:35:10.520123    1424 out.go:97] Using the qemu2 driver based on existing profile
	I1002 03:35:10.520130    1424 start.go:298] selected driver: qemu2
	I1002 03:35:10.520133    1424 start.go:902] validating driver "qemu2" against &{Name:download-only-679000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-679000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:35:10.522427    1424 cni.go:84] Creating CNI manager for ""
	I1002 03:35:10.522440    1424 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:35:10.522447    1424 start_flags.go:321] config:
	{Name:download-only-679000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-679000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:35:10.526487    1424 iso.go:125] acquiring lock: {Name:mke5bbece44fc1c18af5ed8d18cea755c5b0301b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:35:10.529191    1424 out.go:97] Starting control plane node download-only-679000 in cluster download-only-679000
	I1002 03:35:10.529199    1424 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:35:10.590040    1424 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:35:10.590059    1424 cache.go:57] Caching tarball of preloaded images
	I1002 03:35:10.590217    1424 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:35:10.595114    1424 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1002 03:35:10.595123    1424 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	I1002 03:35:10.673391    1424 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4?checksum=md5:48f32a2a1ca4194a6d2a21c3ded2b2db -> /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1002 03:35:15.453598    1424 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	I1002 03:35:15.453778    1424 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17340-994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-679000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.25s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-679000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.25s)

TestBinaryMirror (0.38s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-047000 --alsologtostderr --binary-mirror http://127.0.0.1:49314 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-047000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-047000
--- PASS: TestBinaryMirror (0.38s)

TestHyperKitDriverInstallOrUpdate (8.05s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.05s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.22s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 status: exit status 6 (74.297583ms)

-- stdout --
	nospam-755000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 03:35:55.668821    1549 status.go:415] kubeconfig endpoint: extract IP: "nospam-755000" does not appear in /Users/jenkins/minikube-integration/17340-994/kubeconfig

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 status" failed: exit status 6
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 status: exit status 6 (73.217583ms)

-- stdout --
	nospam-755000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 03:35:55.742320    1551 status.go:415] kubeconfig endpoint: extract IP: "nospam-755000" does not appear in /Users/jenkins/minikube-integration/17340-994/kubeconfig

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 status" failed: exit status 6
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 status: exit status 6 (75.210459ms)

-- stdout --
	nospam-755000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 03:35:55.817695    1553 status.go:415] kubeconfig endpoint: extract IP: "nospam-755000" does not appear in /Users/jenkins/minikube-integration/17340-994/kubeconfig

** /stderr **
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.22s)

TestErrorSpam/pause (4.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 pause: exit status 80 (1.870257334s)

-- stdout --
	* Pausing node nospam-755000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 pause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 pause: exit status 80 (1.506299375s)

-- stdout --
	* Pausing node nospam-755000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 pause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 pause: exit status 80 (1.303679459s)

                                                
                                                
-- stdout --
	* Pausing node nospam-755000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (4.68s)

                                                
                                    
TestErrorSpam/unpause (5.32s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 unpause: exit status 80 (1.698723542s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-755000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: kubelet start: sudo systemctl start kubelet: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start kubelet.service: Unit kubelet.service not found.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 unpause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 unpause: exit status 80 (1.775415833s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-755000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: kubelet start: sudo systemctl start kubelet: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start kubelet.service: Unit kubelet.service not found.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 unpause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 unpause: exit status 80 (1.840858s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-755000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: kubelet start: sudo systemctl start kubelet: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start kubelet.service: Unit kubelet.service not found.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.32s)

                                                
                                    
TestErrorSpam/stop (108.38s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 stop: (1m48.217504541s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-755000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-755000 stop
--- PASS: TestErrorSpam/stop (108.38s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17340-994/.minikube/files/etc/test/nested/copy/1409/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (45.6s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-680000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-680000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (45.60268475s)
--- PASS: TestFunctional/serial/StartWithProxy (45.60s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (32.09s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-680000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-680000 --alsologtostderr -v=8: (32.092176708s)
functional_test.go:659: soft start took 32.092614584s for "functional-680000" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.09s)

                                                
                                    
TestFunctional/serial/KubeContext (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-680000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-680000 cache add registry.k8s.io/pause:3.1: (1.198504417s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-680000 cache add registry.k8s.io/pause:3.3: (1.156924875s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-680000 cache add registry.k8s.io/pause:latest: (1.087646833s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-680000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3127254281/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 cache add minikube-local-cache-test:functional-680000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 cache delete minikube-local-cache-test:functional-680000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-680000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.91s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (72.264333ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.91s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 kubectl -- --context functional-680000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.41s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-680000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.64s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-680000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-680000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.637868s)
functional_test.go:757: restart took 37.637970958s for "functional-680000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.64s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-680000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.63s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1781438562/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.63s)

                                                
                                    
TestFunctional/serial/InvalidService (4.32s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-680000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-680000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-680000: exit status 115 (101.167584ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30878 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-680000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-680000 delete -f testdata/invalidsvc.yaml: (1.094116542s)
--- PASS: TestFunctional/serial/InvalidService (4.32s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 config get cpus: exit status 14 (28.843791ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 config get cpus: exit status 14 (29.162667ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-680000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-680000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2241: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.57s)

                                                
                                    
TestFunctional/parallel/DryRun (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-680000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-680000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.445666ms)

                                                
                                                
-- stdout --
	* [functional-680000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 03:40:54.892708    2218 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:40:54.892867    2218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:40:54.892870    2218 out.go:309] Setting ErrFile to fd 2...
	I1002 03:40:54.892873    2218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:40:54.893017    2218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:40:54.894035    2218 out.go:303] Setting JSON to false
	I1002 03:40:54.912095    2218 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":628,"bootTime":1696242626,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:40:54.912200    2218 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:40:54.917544    2218 out.go:177] * [functional-680000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1002 03:40:54.923522    2218 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:40:54.927541    2218 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:40:54.923555    2218 notify.go:220] Checking for updates...
	I1002 03:40:54.933446    2218 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:40:54.936537    2218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:40:54.939567    2218 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:40:54.942548    2218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:40:54.945769    2218 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:40:54.945999    2218 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:40:54.950535    2218 out.go:177] * Using the qemu2 driver based on existing profile
	I1002 03:40:54.957509    2218 start.go:298] selected driver: qemu2
	I1002 03:40:54.957514    2218 start.go:902] validating driver "qemu2" against &{Name:functional-680000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:functional-680000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:40:54.957562    2218 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:40:54.963371    2218 out.go:177] 
	W1002 03:40:54.967519    2218 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 03:40:54.970562    2218 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-680000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-680000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-680000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (106.865375ms)

-- stdout --
	* [functional-680000] minikube v1.31.2 sur Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 03:40:55.106806    2229 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:40:55.106934    2229 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:40:55.106937    2229 out.go:309] Setting ErrFile to fd 2...
	I1002 03:40:55.106940    2229 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:40:55.107063    2229 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
	I1002 03:40:55.108419    2229 out.go:303] Setting JSON to false
	I1002 03:40:55.125525    2229 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":629,"bootTime":1696242626,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1002 03:40:55.125631    2229 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:40:55.129616    2229 out.go:177] * [functional-680000] minikube v1.31.2 sur Darwin 14.0 (arm64)
	I1002 03:40:55.135547    2229 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:40:55.139566    2229 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	I1002 03:40:55.135613    2229 notify.go:220] Checking for updates...
	I1002 03:40:55.144522    2229 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1002 03:40:55.147571    2229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:40:55.150532    2229 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	I1002 03:40:55.153507    2229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:40:55.156869    2229 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:40:55.157102    2229 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:40:55.161597    2229 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1002 03:40:55.168556    2229 start.go:298] selected driver: qemu2
	I1002 03:40:55.168563    2229 start.go:902] validating driver "qemu2" against &{Name:functional-680000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:functional-680000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:40:55.168609    2229 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:40:55.175518    2229 out.go:177] 
	W1002 03:40:55.179578    2229 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 03:40:55.183521    2229 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.23s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.23s)

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (25.56s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8446c9ef-c480-496c-81b0-1bbda663315f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005598791s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-680000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-680000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-680000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-680000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2fba785f-cc34-40d0-b405-4b83cfe64de1] Pending
helpers_test.go:344: "sp-pod" [2fba785f-cc34-40d0-b405-4b83cfe64de1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2fba785f-cc34-40d0-b405-4b83cfe64de1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.009271292s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-680000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-680000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-680000 delete -f testdata/storage-provisioner/pod.yaml: (1.046027208s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-680000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [efd1dd1f-d23f-4ee6-ae86-e82f8bcf291d] Pending
helpers_test.go:344: "sp-pod" [efd1dd1f-d23f-4ee6-ae86-e82f8bcf291d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [efd1dd1f-d23f-4ee6-ae86-e82f8bcf291d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.007552083s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-680000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.56s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh -n functional-680000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 cp functional-680000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3742354933/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh -n functional-680000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.28s)

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1409/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "sudo cat /etc/test/nested/copy/1409/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.39s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1409.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "sudo cat /etc/ssl/certs/1409.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1409.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "sudo cat /usr/share/ca-certificates/1409.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "sudo cat /etc/ssl/certs/14092.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "sudo cat /usr/share/ca-certificates/14092.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.39s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-680000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 ssh "sudo systemctl is-active crio": exit status 1 (60.2485ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.21s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.21s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-680000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-680000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-680000
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-680000 image ls --format short --alsologtostderr:
I1002 03:40:59.980283    2263 out.go:296] Setting OutFile to fd 1 ...
I1002 03:40:59.980701    2263 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:40:59.980708    2263 out.go:309] Setting ErrFile to fd 2...
I1002 03:40:59.980710    2263 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:40:59.980858    2263 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
I1002 03:40:59.981322    2263 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:40:59.981390    2263 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:40:59.982269    2263 ssh_runner.go:195] Run: systemctl --version
I1002 03:40:59.982280    2263 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/functional-680000/id_rsa Username:docker}
I1002 03:41:00.009140    2263 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-680000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer      | functional-680000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-apiserver              | v1.28.2           | 30bb499447fe1 | 120MB  |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 64fc40cee3716 | 57.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/nginx                     | alpine            | df8fd1ca35d66 | 43.5MB |
| registry.k8s.io/kube-proxy                  | v1.28.2           | 7da62c127fc0f | 68.3MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-680000 | 9f319d496eaf7 | 30B    |
| docker.io/library/nginx                     | latest            | 2a4fbb36e9660 | 192MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 89d57b83c1786 | 116MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-680000 image ls --format table --alsologtostderr:
I1002 03:41:00.228358    2269 out.go:296] Setting OutFile to fd 1 ...
I1002 03:41:00.228530    2269 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:41:00.228534    2269 out.go:309] Setting ErrFile to fd 2...
I1002 03:41:00.228536    2269 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:41:00.228690    2269 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
I1002 03:41:00.229179    2269 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:41:00.229237    2269 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:41:00.230045    2269 ssh_runner.go:195] Run: systemctl --version
I1002 03:41:00.230057    2269 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/functional-680000/id_rsa Username:docker}
I1002 03:41:00.258014    2269 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)
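The table format above is fixed-width and meant for humans; if a script needs to consume it, the `|`-delimited rows can be split directly. A minimal Python sketch (the two sample rows are copied from the listing above; `parse_image_table` is an illustrative helper, not part of minikube):

```python
def parse_image_table(text: str) -> list[dict]:
    """Parse the fixed-width table printed by `image ls --format table`."""
    rows = []
    for line in text.splitlines():
        if not line.startswith("|") or set(line) <= {"|", "-"}:
            continue  # skip blank and border lines
        cells = [c.strip() for c in line.strip("|").split("|")]
        if cells[0] == "Image":
            continue  # skip the header row
        image, tag, image_id, size = cells
        rows.append({"image": image, "tag": tag, "id": image_id, "size": size})
    return rows

# Two rows excerpted from the table output above.
sample = """\
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
|---------------------------------------------|-------------------|---------------|--------|
"""
for row in parse_image_table(sample):
    print(row["image"], row["tag"], row["size"])
```

For anything beyond a quick grep, though, the `--format json` output below is the safer target, since the table's column widths vary with content.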

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-680000 image ls --format json --alsologtostderr:
[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43500000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","rep
oDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"57800000"},{"id":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"116000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"9f319d496eaf7a66009a4f3c802237d914ca26c7bf815bf534336e430437e304","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-680000"],"size":"30"},{"id":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"120000000"},{"id":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","repoDigests":[],"repoTags":["regist
ry.k8s.io/kube-proxy:v1.28.2"],"size":"68300000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-680000"],"size":"32900000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-680000 image ls --format json --alsologtostderr:
I1002 03:41:00.054286    2265 out.go:296] Setting OutFile to fd 1 ...
I1002 03:41:00.054461    2265 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:41:00.054464    2265 out.go:309] Setting ErrFile to fd 2...
I1002 03:41:00.054467    2265 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:41:00.054607    2265 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
I1002 03:41:00.055045    2265 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:41:00.055106    2265 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:41:00.055960    2265 ssh_runner.go:195] Run: systemctl --version
I1002 03:41:00.055972    2265 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/functional-680000/id_rsa Username:docker}
I1002 03:41:00.082483    2265 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
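For scripting, the JSON format is the machine-readable choice; note that `size` is a decimal byte count encoded as a string, not an integer. A small standard-library sketch (the two entries are excerpted from the output above):

```python
import json

# A short excerpt of the `image ls --format json` output shown above.
sample = '''[
  {"id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
   "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.9"], "size": "514000"},
  {"id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
   "repoDigests": [], "repoTags": ["registry.k8s.io/etcd:3.5.9-0"], "size": "181000000"}
]'''

images = json.loads(sample)
# `size` must be converted before doing arithmetic on it.
total_bytes = sum(int(img["size"]) for img in images)
for img in images:
    print(img["repoTags"][0], img["size"])
print("total:", total_bytes)
```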

TestFunctional/parallel/ImageCommands/ImageListYaml (0.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-680000 image ls --format yaml --alsologtostderr:
- id: 30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "120000000"
- id: 89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "116000000"
- id: 64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "57800000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 9f319d496eaf7a66009a4f3c802237d914ca26c7bf815bf534336e430437e304
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-680000
size: "30"
- id: 2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43500000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "68300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-680000
size: "32900000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-680000 image ls --format yaml --alsologtostderr:
I1002 03:41:00.130570    2267 out.go:296] Setting OutFile to fd 1 ...
I1002 03:41:00.130771    2267 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:41:00.130775    2267 out.go:309] Setting ErrFile to fd 2...
I1002 03:41:00.130778    2267 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:41:00.130933    2267 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
I1002 03:41:00.131403    2267 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:41:00.131465    2267 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:41:00.132366    2267 ssh_runner.go:195] Run: systemctl --version
I1002 03:41:00.132385    2267 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/functional-680000/id_rsa Username:docker}
I1002 03:41:00.159031    2267 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.10s)
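The YAML listing above is flat enough to consume without a YAML library, though PyYAML would be the robust choice. A dependency-free sketch for this specific layout (`parse_image_list_yaml` is an illustrative helper; the sample reuses two entries from the list above):

```python
def parse_image_list_yaml(text: str) -> list[dict]:
    """Parse the flat YAML emitted by `image ls --format yaml` (layout as above)."""
    images, current = [], None
    for line in text.splitlines():
        if line.startswith("- id:"):
            # Each "- id:" line opens a new image record.
            current = {"id": line.split(":", 1)[1].strip(), "repoTags": []}
            images.append(current)
        elif line.startswith("size:") and current is not None:
            # size values are quoted strings, e.g. size: "514000"
            current["size"] = int(line.split(":", 1)[1].strip().strip('"'))
        elif line.startswith("- ") and current is not None:
            current["repoTags"].append(line[2:].strip())
    return images

# Two entries excerpted from the YAML output above.
sample = """\
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
"""
for img in parse_image_list_yaml(sample):
    print(img["repoTags"][0], img["size"])
```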

TestFunctional/parallel/ImageCommands/ImageBuild (2.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 ssh pgrep buildkitd: exit status 1 (58.694166ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image build -t localhost/my-image:functional-680000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-680000 image build -t localhost/my-image:functional-680000 testdata/build --alsologtostderr: (2.376976833s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-680000 image build -t localhost/my-image:functional-680000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 0a350f8aa5c8
Removing intermediate container 0a350f8aa5c8
---> 234cadb5ca00
Step 3/3 : ADD content.txt /
---> 12abf30c539b
Successfully built 12abf30c539b
Successfully tagged localhost/my-image:functional-680000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-680000 image build -t localhost/my-image:functional-680000 testdata/build --alsologtostderr:
I1002 03:41:00.360086    2273 out.go:296] Setting OutFile to fd 1 ...
I1002 03:41:00.360377    2273 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:41:00.360385    2273 out.go:309] Setting ErrFile to fd 2...
I1002 03:41:00.360388    2273 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:41:00.360519    2273 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-994/.minikube/bin
I1002 03:41:00.360994    2273 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:41:00.361374    2273 config.go:182] Loaded profile config "functional-680000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:41:00.362266    2273 ssh_runner.go:195] Run: systemctl --version
I1002 03:41:00.362276    2273 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-994/.minikube/machines/functional-680000/id_rsa Username:docker}
I1002 03:41:00.388312    2273 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2981500200.tar
I1002 03:41:00.388368    2273 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 03:41:00.392107    2273 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2981500200.tar
I1002 03:41:00.393804    2273 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2981500200.tar: stat -c "%s %y" /var/lib/minikube/build/build.2981500200.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2981500200.tar': No such file or directory
I1002 03:41:00.393823    2273 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2981500200.tar --> /var/lib/minikube/build/build.2981500200.tar (3072 bytes)
I1002 03:41:00.401158    2273 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2981500200
I1002 03:41:00.405171    2273 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2981500200 -xf /var/lib/minikube/build/build.2981500200.tar
I1002 03:41:00.408292    2273 docker.go:340] Building image: /var/lib/minikube/build/build.2981500200
I1002 03:41:00.408336    2273 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-680000 /var/lib/minikube/build/build.2981500200
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1002 03:41:02.597525    2273 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-680000 /var/lib/minikube/build/build.2981500200: (2.189216375s)
I1002 03:41:02.597612    2273 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2981500200
I1002 03:41:02.602415    2273 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2981500200.tar
I1002 03:41:02.605659    2273 build_images.go:207] Built localhost/my-image:functional-680000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2981500200.tar
I1002 03:41:02.605674    2273 build_images.go:123] succeeded building to: functional-680000
I1002 03:41:02.605676    2273 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image ls
2023/10/02 03:41:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.52s)
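As the stderr above shows, `image build` packages the local `testdata/build` directory into a tarball (here `build.2981500200.tar`, 3072 bytes), copies it to the node over SSH, unpacks it, and only then runs `docker build`. A minimal sketch of just the packaging step, using Python's `tarfile` with a hypothetical in-memory build context (the file names mirror the Dockerfile steps in the build output; this is an illustration of the idea, not minikube's implementation):

```python
import io
import tarfile

# Hypothetical build context mirroring the three Dockerfile steps above.
files = {
    "Dockerfile": b"FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n",
    "content.txt": b"hello\n",
}

# Pack the context into an in-memory tar archive.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in files.items():
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Re-open it, as the remote side would, and list the members.
buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    tar_names = sorted(tar.getnames())
print(tar_names)
```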

TestFunctional/parallel/ImageCommands/Setup (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.616836708s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-680000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.71s)

TestFunctional/parallel/DockerEnv/bash (0.39s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-680000 docker-env) && out/minikube-darwin-arm64 status -p functional-680000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-680000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-680000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-680000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-9jzx9" [fad13bf5-4ef5-4d4c-856a-085c5527db25] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-9jzx9" [fad13bf5-4ef5-4d4c-856a-085c5527db25] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.017505458s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image load --daemon gcr.io/google-containers/addon-resizer:functional-680000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-680000 image load --daemon gcr.io/google-containers/addon-resizer:functional-680000 --alsologtostderr: (2.109460458s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.22s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image load --daemon gcr.io/google-containers/addon-resizer:functional-680000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-680000 image load --daemon gcr.io/google-containers/addon-resizer:functional-680000 --alsologtostderr: (1.459770708s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.53s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.48108375s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-680000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image load --daemon gcr.io/google-containers/addon-resizer:functional-680000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-680000 image load --daemon gcr.io/google-containers/addon-resizer:functional-680000 --alsologtostderr: (1.854667375s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.46s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image save gcr.io/google-containers/addon-resizer:functional-680000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image rm gcr.io/google-containers/addon-resizer:functional-680000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-680000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 image save --daemon gcr.io/google-containers/addon-resizer:functional-680000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-680000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-680000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-680000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-680000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-680000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2036: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-680000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-680000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c019496c-8e19-4ed5-a1e8-ec93c8ee8e2e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c019496c-8e19-4ed5-a1e8-ec93c8ee8e2e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.007052416s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.12s)

TestFunctional/parallel/ServiceCmd/List (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 service list -o json
functional_test.go:1493: Took "85.469958ms" to run "out/minikube-darwin-arm64 -p functional-680000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:31613
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:31613
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-680000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
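The IngressIP step above reads the LoadBalancer address with the JSONPath query `{.status.loadBalancer.ingress[0].ip}`. The same lookup can be sketched in plain Python against `kubectl get svc -o json` output; the sample Service document below is invented for illustration (trimmed to the queried fields, with the tunnel IP taken from the AccessDirect step):

```python
import json

# Invented sample of "kubectl get svc nginx-svc -o json" output, trimmed to
# the fields read by the JSONPath query {.status.loadBalancer.ingress[0].ip}.
svc_json = """
{
  "kind": "Service",
  "metadata": {"name": "nginx-svc", "namespace": "default"},
  "status": {"loadBalancer": {"ingress": [{"ip": "10.103.98.69"}]}}
}
"""

def ingress_ip(doc):
    """Return the first LoadBalancer ingress IP, or None if none is assigned yet."""
    svc = json.loads(doc)
    ingress = svc.get("status", {}).get("loadBalancer", {}).get("ingress") or []
    return ingress[0].get("ip") if ingress else None

print(ingress_ip(svc_json))
```

Until `minikube tunnel` assigns an address, `status.loadBalancer.ingress` is absent, which is why the helper tolerates missing keys and returns None.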
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.98.69 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-680000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "107.69775ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "32.780042ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "110.044958ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "31.628541ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-680000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3818490988/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696243241145095000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3818490988/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696243241145095000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3818490988/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696243241145095000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3818490988/001/test-1696243241145095000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.699917ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 10:40 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 10:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 10:40 test-1696243241145095000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh cat /mount-9p/test-1696243241145095000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-680000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [68ce12de-1774-4560-9092-c9cfd6b1b59a] Pending
helpers_test.go:344: "busybox-mount" [68ce12de-1774-4560-9092-c9cfd6b1b59a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [68ce12de-1774-4560-9092-c9cfd6b1b59a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [68ce12de-1774-4560-9092-c9cfd6b1b59a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.007673708s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-680000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-680000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3818490988/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.43s)
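The mount check above tolerates an initial failure: `findmnt -T /mount-9p | grep 9p` exits non-zero on the first attempt and is re-run until the 9p mount appears. A minimal sketch of that retry pattern in Python (the `wait_for_9p_mount` helper, its retry budget, and the injectable `probe` hook are assumptions for illustration, not part of the test suite, which runs the probe inside the guest via `minikube ssh`):

```python
import subprocess
import time

def wait_for_9p_mount(mountpoint, probe=None, tries=10, delay=1.0):
    """Poll until findmnt reports a 9p filesystem at mountpoint.

    Returns the attempt number that succeeded; raises TimeoutError if the
    mount never appears. `probe` may be injected for testing; by default it
    shells out to the same check the log shows ("findmnt -T ... | grep 9p").
    """
    if probe is None:
        def probe(mp):
            out = subprocess.run(["findmnt", "-T", mp],
                                 capture_output=True, text=True)
            return "9p" in out.stdout
    for attempt in range(1, tries + 1):
        if probe(mountpoint):
            return attempt
        time.sleep(delay)
    raise TimeoutError(f"{mountpoint} not mounted after {tries} attempts")
```

The log above shows exactly this shape: one non-zero exit from the probe, then a successful re-run once the 9p mount is established.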
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-680000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port531353657/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.347917ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-680000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port531353657/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 ssh "sudo umount -f /mount-9p": exit status 1 (59.75125ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-680000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-680000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port531353657/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.81s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-680000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-680000
--- PASS: TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-680000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-330000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-330000 --driver=qemu2 : (29.938702041s)
--- PASS: TestImageBuild/serial/Setup (29.94s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-330000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-330000: (1.0379955s)
--- PASS: TestImageBuild/serial/NormalBuild (1.04s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-330000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.18s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-330000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-545000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-545000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m14.292580209s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (74.29s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-545000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-545000 addons enable ingress --alsologtostderr -v=5: (15.349703292s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.35s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-545000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.25s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-963000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E1002 03:45:03.394764    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
E1002 03:45:03.400856    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
E1002 03:45:03.411648    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
E1002 03:45:03.433691    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
E1002 03:45:03.475726    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
E1002 03:45:03.557795    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
E1002 03:45:03.719834    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
E1002 03:45:04.041896    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
E1002 03:45:04.683985    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
E1002 03:45:05.966174    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
E1002 03:45:08.528224    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
E1002 03:45:13.650309    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-963000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (1m23.019529208s)
--- PASS: TestJSONOutput/start/Command (83.02s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-963000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.27s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-963000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.20s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-963000 --output=json --user=testUser
E1002 03:45:23.892611    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-963000 --output=json --user=testUser: (12.074478084s)
--- PASS: TestJSONOutput/stop/Command (12.07s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.36s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-192000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-192000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (111.34475ms)
-- stdout --
	{"specversion":"1.0","id":"cc2b46e4-11ab-47b6-aab1-e5f0db0f2581","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-192000] minikube v1.31.2 on Darwin 14.0 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"77488ea0-e2c2-4f08-9d64-5b7873a53537","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17340"}}
	{"specversion":"1.0","id":"73e41c74-6dba-4e88-8c0e-f534bdde5568","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig"}}
	{"specversion":"1.0","id":"92362801-84bc-4aee-ac06-ebeb4a6830df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"415e7f8f-66bd-4a13-ae2e-1a5d6ca6a619","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"924a94bd-2512-4117-8a48-23d2f91ad9a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube"}}
	{"specversion":"1.0","id":"8cc2c278-fc98-4062-8f98-eb2d36689a54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"476249ed-c61f-41b2-984a-27db44996344","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-192000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-192000
--- PASS: TestErrorJSONOutput (0.36s)
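
The stdout block above is minikube's `--output=json` event stream: one CloudEvents-style JSON object per line, with `specversion`, `id`, `source`, `type`, and `data` fields; error events (type `io.k8s.sigs.minikube.error`) carry `exitcode`, `name`, and `message` in `data`. As an illustrative sketch only (not part of this test suite), such a stream can be scanned for the first error event like this:

```python
import json

def find_error(lines):
    """Return (name, exitcode, message) of the first error event, or None.

    Expects one CloudEvents-style JSON object per line; non-JSON log
    noise interleaved in the stream is skipped.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("{"):
            continue  # skip non-JSON lines
        event = json.loads(line)
        if event.get("type", "").endswith(".error"):
            data = event.get("data", {})
            return data.get("name"), data.get("exitcode"), data.get("message")
    return None

# Two events trimmed from the stream above (id/source fields omitted for brevity).
sample = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=17340"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"advice":"","exitcode":"56","issues":"","message":"The driver \'fail\' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}',
]
print(find_error(sample))
```

Keying off the `type` suffix rather than exit status lets a caller distinguish fatal error events from ordinary info/step events in the same stream.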

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (61.69s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-004000 --driver=qemu2 
E1002 03:45:44.374525    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-004000 --driver=qemu2 : (29.715047917s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-005000 --driver=qemu2 
E1002 03:46:25.334481    1409 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-994/.minikube/profiles/functional-680000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-005000 --driver=qemu2 : (31.150793708s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-004000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-005000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-005000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-005000
helpers_test.go:175: Cleaning up "first-004000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-004000
--- PASS: TestMinikubeProfile (61.69s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-264000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-264000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (101.828334ms)
-- stdout --
	* [NoKubernetes-264000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-994/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-994/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-264000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-264000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.533084ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-264000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-264000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-264000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-264000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (39.183041ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-264000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-805000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-805000 -n old-k8s-version-805000: exit status 7 (28.94575ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-805000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-049000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-049000 -n no-preload-049000: exit status 7 (29.235375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-049000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-344000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-344000 -n embed-certs-344000: exit status 7 (28.82525ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-344000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-061000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-061000 -n default-k8s-diff-port-061000: exit status 7 (27.933292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-061000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-191000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-191000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-191000 -n newest-cni-191000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-191000 -n newest-cni-191000: exit status 7 (27.408125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-191000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/244)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (12.17s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-680000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2085953808/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-680000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2085953808/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-680000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2085953808/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount1: exit status 1 (84.630459ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount3: exit status 1 (58.0455ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount3: exit status 1 (57.571417ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount3: exit status 1 (55.792083ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount3: exit status 1 (57.712417ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount3: exit status 1 (81.618459ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-680000 ssh "findmnt -T" /mount3: exit status 1 (58.915666ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-680000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2085953808/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-680000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2085953808/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-680000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2085953808/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.17s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.34s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-547000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-547000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-547000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-547000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-547000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-547000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-547000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-547000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-547000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-547000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-547000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: /etc/hosts:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: /etc/resolv.conf:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-547000

>>> host: crictl pods:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: crictl containers:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> k8s: describe netcat deployment:
error: context "cilium-547000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-547000" does not exist

>>> k8s: netcat logs:
error: context "cilium-547000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-547000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-547000" does not exist

>>> k8s: coredns logs:
error: context "cilium-547000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-547000" does not exist

>>> k8s: api server logs:
error: context "cilium-547000" does not exist

>>> host: /etc/cni:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: ip a s:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: ip r s:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: iptables-save:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: iptables table nat:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-547000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-547000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-547000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-547000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-547000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-547000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-547000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-547000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-547000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-547000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-547000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: kubelet daemon config:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> k8s: kubelet logs:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-547000

>>> host: docker daemon status:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: docker daemon config:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: docker system info:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: cri-docker daemon status:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: cri-docker daemon config:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: cri-dockerd version:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: containerd daemon status:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: containerd daemon config:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: containerd config dump:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: crio daemon status:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: crio daemon config:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: /etc/crio:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

>>> host: crio config:
* Profile "cilium-547000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547000"

----------------------- debugLogs end: cilium-547000 [took: 2.101954292s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-547000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-547000
--- SKIP: TestNetworkPlugins/group/cilium (2.34s)

TestStartStop/group/disable-driver-mounts (0.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-592000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-592000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)
