Test Report: QEMU_macOS 17345

57fac428b5f480c5d5720c0006970cf71a80e13d:2023-10-03:31284

Failed tests (89/256)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 10.49
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.85
24 TestAddons/parallel/Registry 720.86
25 TestAddons/parallel/Ingress 0.81
26 TestAddons/parallel/InspektorGadget 480.86
38 TestCertOptions 10.2
39 TestCertExpiration 195.33
40 TestDockerFlags 9.92
41 TestForceSystemdFlag 10.01
42 TestForceSystemdEnv 10.62
87 TestFunctional/parallel/ServiceCmdConnect 41.27
105 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.18
154 TestImageBuild/serial/BuildWithBuildArg 1.08
163 TestIngressAddonLegacy/serial/ValidateIngressAddons 52.8
198 TestMountStart/serial/StartWithMountFirst 10.38
201 TestMultiNode/serial/FreshStart2Nodes 9.77
202 TestMultiNode/serial/DeployApp2Nodes 90.56
203 TestMultiNode/serial/PingHostFrom2Pods 0.08
204 TestMultiNode/serial/AddNode 0.07
205 TestMultiNode/serial/ProfileList 0.1
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.13
208 TestMultiNode/serial/StartAfterStop 0.1
209 TestMultiNode/serial/RestartKeepsNodes 5.37
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 0.15
212 TestMultiNode/serial/RestartMultiNode 5.25
213 TestMultiNode/serial/ValidateNameConflict 20.1
217 TestPreload 9.96
219 TestScheduledStopUnix 10.09
220 TestSkaffold 12.18
223 TestRunningBinaryUpgrade 172.55
225 TestKubernetesUpgrade 15.19
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.34
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.58
240 TestStoppedBinaryUpgrade/Setup 139.54
242 TestPause/serial/Start 9.84
252 TestNoKubernetes/serial/StartWithK8s 9.74
253 TestNoKubernetes/serial/StartWithStopK8s 5.32
254 TestNoKubernetes/serial/Start 5.33
258 TestNoKubernetes/serial/StartNoArgs 5.31
260 TestNetworkPlugins/group/auto/Start 9.78
261 TestNetworkPlugins/group/kindnet/Start 9.82
262 TestNetworkPlugins/group/calico/Start 9.79
263 TestNetworkPlugins/group/custom-flannel/Start 9.91
264 TestNetworkPlugins/group/false/Start 9.72
265 TestNetworkPlugins/group/enable-default-cni/Start 9.91
266 TestNetworkPlugins/group/flannel/Start 9.7
267 TestNetworkPlugins/group/bridge/Start 9.74
268 TestStoppedBinaryUpgrade/Upgrade 3.36
269 TestNetworkPlugins/group/kubenet/Start 9.68
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
272 TestStartStop/group/old-k8s-version/serial/FirstStart 9.76
274 TestStartStop/group/no-preload/serial/FirstStart 11.29
275 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
276 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
279 TestStartStop/group/old-k8s-version/serial/SecondStart 7.07
280 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
281 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
282 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
283 TestStartStop/group/old-k8s-version/serial/Pause 0.1
285 TestStartStop/group/embed-certs/serial/FirstStart 11.63
286 TestStartStop/group/no-preload/serial/DeployApp 0.1
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
290 TestStartStop/group/no-preload/serial/SecondStart 7.12
291 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
292 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
294 TestStartStop/group/no-preload/serial/Pause 0.1
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.26
297 TestStartStop/group/embed-certs/serial/DeployApp 0.1
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
301 TestStartStop/group/embed-certs/serial/SecondStart 7.06
302 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/embed-certs/serial/Pause 0.1
307 TestStartStop/group/newest-cni/serial/FirstStart 11.39
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.99
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/SecondStart 5.25
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
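
Most of the failures above reduce to two root causes visible in the logs below: the v1.16.0 kubectl download returns 404 (TestDownloadOnly and its dependents), and every qemu2 VM creation fails with `Failed to connect to "/var/run/socket_vmnet": Connection refused`. To reproduce a single failure locally, the suite can be filtered with go test's -run flag. This is a minimal sketch assuming a minikube v1.31.x checkout with out/minikube-darwin-arm64 already built; the --binary and -minikube-start-args test flags are assumptions to verify against test/integration/main_test.go for your revision:

    # from the minikube repo root; subtest names are matched segment by segment
    go test ./test/integration -v -timeout 30m \
      -run 'TestDownloadOnly/v1.16.0/json-events' \
      -args --binary=../../out/minikube-darwin-arm64 \
            -minikube-start-args='--driver=qemu2'
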
TestDownloadOnly/v1.16.0/json-events (10.49s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-278000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-278000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (10.48511975s)

-- stdout --
	{"specversion":"1.0","id":"9d5543b0-2edb-446c-8a7a-a584336a3dcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-278000] minikube v1.31.2 on Darwin 14.0 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b154e03-d8c3-42e0-8e3d-5aab9e9d8328","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17345"}}
	{"specversion":"1.0","id":"b828b7a8-d747-4ee6-8b86-a1c67981fe41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig"}}
	{"specversion":"1.0","id":"19329941-c925-4fe1-9a46-81e4c157a5b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"66d012a9-624d-4954-8950-a2747556a2eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"546b9372-c1ee-4f81-b902-dcd31700c492","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube"}}
	{"specversion":"1.0","id":"ff242dfe-a0f4-4de8-a71f-f0a0bdd9d61c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"65c19a22-1cd9-4bfd-bcf2-f9e2be1424e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b256091b-d956-4ea8-ae9f-1c1002595a89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"93117cd1-841c-4a3a-a34d-38cd0936cabf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5906827e-3295-48f9-bfa7-ffceb5346494","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-278000 in cluster download-only-278000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8969913a-5379-4a98-8925-04f74f58ef2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ffef25a4-7ea0-4ad7-98e6-8f729dfdfc83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17345-986/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103c65880 0x103c65880 0x103c65880 0x103c65880 0x103c65880 0x103c65880 0x103c65880] Decompressors:map[bz2:0x14000677640 gz:0x14000677648 tar:0x14000677560 tar.bz2:0x140006775a0 tar.gz:0x140006775b0 tar.xz:0x140006775f0 tar.zst:0x14000677630 tbz2:0x140006775a0 tgz:0x1400067
75b0 txz:0x140006775f0 tzst:0x14000677630 xz:0x14000677650 zip:0x14000677670 zst:0x14000677658] Getters:map[file:0x140006f8900 http:0x1400017e7d0 https:0x1400017e870] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"5ce26765-db16-4e6e-bc14-587847cdf960","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1003 17:03:20.858615    1449 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:03:20.858796    1449 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:03:20.858799    1449 out.go:309] Setting ErrFile to fd 2...
	I1003 17:03:20.858801    1449 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:03:20.858958    1449 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	W1003 17:03:20.859020    1449 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17345-986/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17345-986/.minikube/config/config.json: no such file or directory
	I1003 17:03:20.860141    1449 out.go:303] Setting JSON to true
	I1003 17:03:20.877628    1449 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":174,"bootTime":1696377626,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:03:20.877705    1449 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:03:20.885159    1449 out.go:97] [download-only-278000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:03:20.889091    1449 out.go:169] MINIKUBE_LOCATION=17345
	I1003 17:03:20.885275    1449 notify.go:220] Checking for updates...
	W1003 17:03:20.885303    1449 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball: no such file or directory
	I1003 17:03:20.900972    1449 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:03:20.905061    1449 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:03:20.908145    1449 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:03:20.911073    1449 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	W1003 17:03:20.917063    1449 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 17:03:20.917261    1449 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:03:20.923036    1449 out.go:97] Using the qemu2 driver based on user configuration
	I1003 17:03:20.923042    1449 start.go:298] selected driver: qemu2
	I1003 17:03:20.923056    1449 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:03:20.923123    1449 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:03:20.927054    1449 out.go:169] Automatically selected the socket_vmnet network
	I1003 17:03:20.934034    1449 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1003 17:03:20.934137    1449 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 17:03:20.934198    1449 cni.go:84] Creating CNI manager for ""
	I1003 17:03:20.934215    1449 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 17:03:20.934219    1449 start_flags.go:321] config:
	{Name:download-only-278000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-278000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:03:20.940506    1449 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:03:20.945030    1449 out.go:97] Downloading VM boot image ...
	I1003 17:03:20.945060    1449 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso
	I1003 17:03:25.224536    1449 out.go:97] Starting control plane node download-only-278000 in cluster download-only-278000
	I1003 17:03:25.224554    1449 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 17:03:25.280363    1449 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1003 17:03:25.280384    1449 cache.go:57] Caching tarball of preloaded images
	I1003 17:03:25.280517    1449 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 17:03:25.284733    1449 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1003 17:03:25.284739    1449 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1003 17:03:25.358847    1449 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1003 17:03:30.398297    1449 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1003 17:03:30.398467    1449 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1003 17:03:31.040587    1449 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1003 17:03:31.040782    1449 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/download-only-278000/config.json ...
	I1003 17:03:31.040798    1449 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/download-only-278000/config.json: {Name:mk5649223888d7fca3bc6155a452f90fb2c86f5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:03:31.041029    1449 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 17:03:31.041176    1449 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I1003 17:03:31.274286    1449 out.go:169] 
	W1003 17:03:31.278536    1449 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17345-986/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103c65880 0x103c65880 0x103c65880 0x103c65880 0x103c65880 0x103c65880 0x103c65880] Decompressors:map[bz2:0x14000677640 gz:0x14000677648 tar:0x14000677560 tar.bz2:0x140006775a0 tar.gz:0x140006775b0 tar.xz:0x140006775f0 tar.zst:0x14000677630 tbz2:0x140006775a0 tgz:0x140006775b0 txz:0x140006775f0 tzst:0x14000677630 xz:0x14000677650 zip:0x14000677670 zst:0x14000677658] Getters:map[file:0x140006f8900 http:0x1400017e7d0 https:0x1400017e870] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1003 17:03:31.278565    1449 out_reason.go:110] 
	W1003 17:03:31.285624    1449 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:03:31.289414    1449 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-278000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (10.49s)
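
Exit status 40 here is a download failure, not a QEMU one: dl.k8s.io publishes no darwin/arm64 kubectl for v1.16.0 (that release predates Apple-silicon client builds), so the .sha1 checksum fetch 404s and minikube aborts caching. The kubectl subtest below fails as a direct consequence, since the binary never reached the cache. The 404 can be confirmed from any machine; the second URL is an illustrative newer release that does ship darwin/arm64 binaries:

    curl -sIL -o /dev/null -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1    # prints 404
    curl -sIL -o /dev/null -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.28.2/bin/darwin/arm64/kubectl.sha256  # prints 200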

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17345-986/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestOffline (9.85s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-974000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-974000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.69255925s)

-- stdout --
	* [offline-docker-974000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-974000 in cluster offline-docker-974000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-974000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:34:32.555992    3729 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:34:32.556152    3729 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:34:32.556155    3729 out.go:309] Setting ErrFile to fd 2...
	I1003 17:34:32.556159    3729 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:34:32.556293    3729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:34:32.557419    3729 out.go:303] Setting JSON to false
	I1003 17:34:32.575080    3729 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2046,"bootTime":1696377626,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:34:32.575155    3729 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:34:32.580029    3729 out.go:177] * [offline-docker-974000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:34:32.587153    3729 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:34:32.587286    3729 notify.go:220] Checking for updates...
	I1003 17:34:32.594039    3729 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:34:32.597108    3729 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:34:32.600027    3729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:34:32.603075    3729 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:34:32.606090    3729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:34:32.609519    3729 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:34:32.609574    3729 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:34:32.612991    3729 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:34:32.618962    3729 start.go:298] selected driver: qemu2
	I1003 17:34:32.618971    3729 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:34:32.618978    3729 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:34:32.620942    3729 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:34:32.624003    3729 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:34:32.627217    3729 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:34:32.627251    3729 cni.go:84] Creating CNI manager for ""
	I1003 17:34:32.627270    3729 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:34:32.627275    3729 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:34:32.627289    3729 start_flags.go:321] config:
	{Name:offline-docker-974000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-974000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:34:32.631843    3729 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:34:32.639001    3729 out.go:177] * Starting control plane node offline-docker-974000 in cluster offline-docker-974000
	I1003 17:34:32.642996    3729 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:34:32.643020    3729 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:34:32.643032    3729 cache.go:57] Caching tarball of preloaded images
	I1003 17:34:32.643097    3729 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:34:32.643103    3729 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:34:32.643174    3729 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/offline-docker-974000/config.json ...
	I1003 17:34:32.643185    3729 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/offline-docker-974000/config.json: {Name:mkf3ecc70b417522915445cb805b6e2cd3d9e876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:34:32.643429    3729 start.go:365] acquiring machines lock for offline-docker-974000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:34:32.643458    3729 start.go:369] acquired machines lock for "offline-docker-974000" in 22.667µs
	I1003 17:34:32.643473    3729 start.go:93] Provisioning new machine with config: &{Name:offline-docker-974000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-974000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:34:32.643507    3729 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:34:32.648050    3729 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 17:34:32.662633    3729 start.go:159] libmachine.API.Create for "offline-docker-974000" (driver="qemu2")
	I1003 17:34:32.662660    3729 client.go:168] LocalClient.Create starting
	I1003 17:34:32.662723    3729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:34:32.662751    3729 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:32.662764    3729 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:32.662807    3729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:34:32.662825    3729 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:32.662833    3729 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:32.663148    3729 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:34:32.777599    3729 main.go:141] libmachine: Creating SSH key...
	I1003 17:34:32.857269    3729 main.go:141] libmachine: Creating Disk image...
	I1003 17:34:32.857281    3729 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:34:32.857452    3729 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/disk.qcow2
	I1003 17:34:32.866716    3729 main.go:141] libmachine: STDOUT: 
	I1003 17:34:32.866735    3729 main.go:141] libmachine: STDERR: 
	I1003 17:34:32.866794    3729 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/disk.qcow2 +20000M
	I1003 17:34:32.875075    3729 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:34:32.875092    3729 main.go:141] libmachine: STDERR: 
	I1003 17:34:32.875114    3729 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/disk.qcow2
	I1003 17:34:32.875125    3729 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:34:32.875176    3729 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:67:20:2d:99:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/disk.qcow2
	I1003 17:34:32.877109    3729 main.go:141] libmachine: STDOUT: 
	I1003 17:34:32.877122    3729 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:34:32.877139    3729 client.go:171] LocalClient.Create took 214.475583ms
	I1003 17:34:34.877971    3729 start.go:128] duration metric: createHost completed in 2.234499083s
	I1003 17:34:34.877991    3729 start.go:83] releasing machines lock for "offline-docker-974000", held for 2.234571083s
	W1003 17:34:34.878001    3729 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:34.883134    3729 out.go:177] * Deleting "offline-docker-974000" in qemu2 ...
	W1003 17:34:34.895427    3729 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:34.895434    3729 start.go:703] Will try again in 5 seconds ...
	I1003 17:34:39.897437    3729 start.go:365] acquiring machines lock for offline-docker-974000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:34:39.897543    3729 start.go:369] acquired machines lock for "offline-docker-974000" in 85.916µs
	I1003 17:34:39.897570    3729 start.go:93] Provisioning new machine with config: &{Name:offline-docker-974000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-974000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:34:39.897614    3729 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:34:39.910340    3729 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 17:34:39.927700    3729 start.go:159] libmachine.API.Create for "offline-docker-974000" (driver="qemu2")
	I1003 17:34:39.927737    3729 client.go:168] LocalClient.Create starting
	I1003 17:34:39.927815    3729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:34:39.927851    3729 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:39.927862    3729 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:39.927902    3729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:34:39.927924    3729 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:39.927931    3729 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:39.928641    3729 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:34:40.045398    3729 main.go:141] libmachine: Creating SSH key...
	I1003 17:34:40.161546    3729 main.go:141] libmachine: Creating Disk image...
	I1003 17:34:40.161556    3729 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:34:40.161733    3729 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/disk.qcow2
	I1003 17:34:40.170974    3729 main.go:141] libmachine: STDOUT: 
	I1003 17:34:40.170991    3729 main.go:141] libmachine: STDERR: 
	I1003 17:34:40.171047    3729 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/disk.qcow2 +20000M
	I1003 17:34:40.179000    3729 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:34:40.179024    3729 main.go:141] libmachine: STDERR: 
	I1003 17:34:40.179044    3729 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/disk.qcow2
	I1003 17:34:40.179051    3729 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:34:40.179085    3729 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:84:45:18:eb:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/offline-docker-974000/disk.qcow2
	I1003 17:34:40.180835    3729 main.go:141] libmachine: STDOUT: 
	I1003 17:34:40.180849    3729 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:34:40.180867    3729 client.go:171] LocalClient.Create took 253.1285ms
	I1003 17:34:42.182967    3729 start.go:128] duration metric: createHost completed in 2.28537875s
	I1003 17:34:42.183014    3729 start.go:83] releasing machines lock for "offline-docker-974000", held for 2.285508416s
	W1003 17:34:42.183175    3729 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-974000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-974000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:42.190733    3729 out.go:177] 
	W1003 17:34:42.194859    3729 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:34:42.194896    3729 out.go:239] * 
	* 
	W1003 17:34:42.197249    3729 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:34:42.207726    3729 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-974000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:523: *** TestOffline FAILED at 2023-10-03 17:34:42.220873 -0700 PDT m=+1881.443586459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-974000 -n offline-docker-974000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-974000 -n offline-docker-974000: exit status 7 (63.128125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-974000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-974000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-974000
--- FAIL: TestOffline (9.85s)
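
The trace above fails at one precondition: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which needs a socket_vmnet daemon listening on /var/run/socket_vmnet, and that daemon was evidently not running on the agent (hence "Connection refused" on both create attempts, and the same error throughout the other Start failures in this report). A hedged host-side checklist, assuming socket_vmnet lives at the paths shown in the log; the right launch mechanism depends on how it was installed:

    ls -l /var/run/socket_vmnet      # missing socket => daemon not running
    pgrep -fl socket_vmnet           # any live daemon process?
    # if installed via Homebrew:
    sudo brew services start socket_vmnet
    # or run the daemon directly (vmnet needs root); the gateway address is illustrative:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet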

TestAddons/parallel/Registry (720.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: failed waiting for registry replicacontroller to stabilize: timed out waiting for the condition
addons_test.go:308: registry stabilized in 6m0.001071s
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
addons_test.go:310: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:310: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-585000 -n addons-585000
addons_test.go:310: TestAddons/parallel/Registry: showing logs for failed pods as of 2023-10-03 17:22:24.67407 -0700 PDT m=+1143.882360668
addons_test.go:311: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
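
When this wait times out against a live cluster, querying the same label selector the test uses shows why the pod never matched. A sketch, assuming minikube's default behavior of naming the kubectl context after the profile (addons-585000):

    kubectl --context addons-585000 -n kube-system get pods -l actual-registry=true -o wide
    kubectl --context addons-585000 -n kube-system describe pods -l actual-registry=true
    kubectl --context addons-585000 -n kube-system get rc    # the registry addon runs a ReplicationController
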
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-585000 -n addons-585000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT |                     |
	|         | -p download-only-278000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT |                     |
	|         | -p download-only-278000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:03 PDT |
	| delete  | -p download-only-278000                                                                     | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:03 PDT |
	| delete  | -p download-only-278000                                                                     | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:03 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-585000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT |                     |
	|         | binary-mirror-585000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49316                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-585000                                                                     | binary-mirror-585000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:03 PDT |
	| start   | -p addons-585000 --wait=true                                                                | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:10 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-585000 ssh cat                                                                       | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:10 PDT | 03 Oct 23 17:10 PDT |
	|         | /opt/local-path-provisioner/pvc-320167fa-02d3-46e8-a116-8a91ec031e73_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-585000 addons disable                                                                | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:10 PDT | 03 Oct 23 17:11 PDT |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:11 PDT | 03 Oct 23 17:11 PDT |
	|         | addons-585000                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:11 PDT | 03 Oct 23 17:11 PDT |
	|         | -p addons-585000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-585000 addons                                                                        | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:12 PDT | 03 Oct 23 17:12 PDT |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-585000 addons                                                                        | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:12 PDT | 03 Oct 23 17:12 PDT |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-585000 addons                                                                        | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:12 PDT | 03 Oct 23 17:12 PDT |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
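	The download-only rows above can be replayed by hand; a minimal sketch of that sequence, with the profile name and Kubernetes version taken from the table rows and everything else assumed default:

	  minikube start -o=json --download-only -p download-only-278000 \
	    --force --alsologtostderr --kubernetes-version=v1.28.2 \
	    --container-runtime=docker --driver=qemu2
	  minikube delete --all
	  minikube delete -p download-only-278000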
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/03 17:03:39
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:03:39.158581    1527 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:03:39.158729    1527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:03:39.158732    1527 out.go:309] Setting ErrFile to fd 2...
	I1003 17:03:39.158735    1527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:03:39.158883    1527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:03:39.159993    1527 out.go:303] Setting JSON to false
	I1003 17:03:39.176087    1527 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":193,"bootTime":1696377626,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:03:39.176163    1527 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:03:39.181737    1527 out.go:177] * [addons-585000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:03:39.192759    1527 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:03:39.188852    1527 notify.go:220] Checking for updates...
	I1003 17:03:39.199748    1527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:03:39.202788    1527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:03:39.205793    1527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:03:39.208724    1527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:03:39.211745    1527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:03:39.214972    1527 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:03:39.217703    1527 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:03:39.224760    1527 start.go:298] selected driver: qemu2
	I1003 17:03:39.224769    1527 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:03:39.224776    1527 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:03:39.227233    1527 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:03:39.228551    1527 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:03:39.231862    1527 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:03:39.231889    1527 cni.go:84] Creating CNI manager for ""
	I1003 17:03:39.231898    1527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:03:39.231909    1527 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:03:39.231915    1527 start_flags.go:321] config:
	{Name:addons-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-585000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:03:39.236467    1527 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:03:39.244723    1527 out.go:177] * Starting control plane node addons-585000 in cluster addons-585000
	I1003 17:03:39.248720    1527 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:03:39.248732    1527 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:03:39.248743    1527 cache.go:57] Caching tarball of preloaded images
	I1003 17:03:39.248794    1527 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:03:39.248799    1527 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:03:39.249009    1527 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/config.json ...
	I1003 17:03:39.249019    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/config.json: {Name:mkd778f466258ed6668af8388431c37d54563e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:03:39.249218    1527 start.go:365] acquiring machines lock for addons-585000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:03:39.249347    1527 start.go:369] acquired machines lock for "addons-585000" in 123.542µs
	I1003 17:03:39.249358    1527 start.go:93] Provisioning new machine with config: &{Name:addons-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-585000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:03:39.249387    1527 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:03:39.257735    1527 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1003 17:03:40.011044    1527 start.go:159] libmachine.API.Create for "addons-585000" (driver="qemu2")
	I1003 17:03:40.011102    1527 client.go:168] LocalClient.Create starting
	I1003 17:03:40.011343    1527 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:03:40.143344    1527 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:03:40.289873    1527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:03:40.457885    1527 main.go:141] libmachine: Creating SSH key...
	I1003 17:03:40.594285    1527 main.go:141] libmachine: Creating Disk image...
	I1003 17:03:40.594296    1527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:03:40.594516    1527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2
	I1003 17:03:40.676774    1527 main.go:141] libmachine: STDOUT: 
	I1003 17:03:40.676808    1527 main.go:141] libmachine: STDERR: 
	I1003 17:03:40.676887    1527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2 +20000M
	I1003 17:03:40.686768    1527 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:03:40.686791    1527 main.go:141] libmachine: STDERR: 
	I1003 17:03:40.686809    1527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2
	I1003 17:03:40.686818    1527 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:03:40.686868    1527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:68:9c:60:58:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2
	I1003 17:03:40.738577    1527 main.go:141] libmachine: STDOUT: 
	I1003 17:03:40.738615    1527 main.go:141] libmachine: STDERR: 
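	The two qemu-img calls above implement a copy-on-write boot disk: the raw seed image is converted to qcow2 and then grown to the requested 20000 MB. A minimal sketch of the same steps, assuming qemu-img is installed and $MACHINE (hypothetical) points at the machine directory:

	  # convert the raw seed to qcow2, then grow it; qcow2 only allocates blocks as they are written
	  qemu-img convert -f raw -O qcow2 "$MACHINE/disk.qcow2.raw" "$MACHINE/disk.qcow2"
	  qemu-img resize "$MACHINE/disk.qcow2" +20000M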
	I1003 17:03:40.738620    1527 main.go:141] libmachine: Attempt 0
	I1003 17:03:40.738639    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:42.739795    1527 main.go:141] libmachine: Attempt 1
	I1003 17:03:42.739877    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:44.740254    1527 main.go:141] libmachine: Attempt 2
	I1003 17:03:44.740355    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:46.741375    1527 main.go:141] libmachine: Attempt 3
	I1003 17:03:46.741387    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:48.742425    1527 main.go:141] libmachine: Attempt 4
	I1003 17:03:48.742472    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:50.743507    1527 main.go:141] libmachine: Attempt 5
	I1003 17:03:50.743527    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:52.744581    1527 main.go:141] libmachine: Attempt 6
	I1003 17:03:52.744624    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:52.744780    1527 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1003 17:03:52.744831    1527 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651dfd67}
	I1003 17:03:52.744840    1527 main.go:141] libmachine: Found match: 56:68:9c:60:58:22
	I1003 17:03:52.744854    1527 main.go:141] libmachine: IP: 192.168.105.2
	I1003 17:03:52.744861    1527 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
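	The attempt loop above polls the host's DHCP lease database every two seconds until the VM's MAC address appears, then reads the IP from the matching lease. A rough equivalent of one polling step, using the MAC from this run:

	  # each lease entry in /var/db/dhcpd_leases records name, ip_address and hw_address
	  grep -B2 '56:68:9c:60:58:22' /var/db/dhcpd_leases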
	I1003 17:03:53.749773    1527 machine.go:88] provisioning docker machine ...
	I1003 17:03:53.749795    1527 buildroot.go:166] provisioning hostname "addons-585000"
	I1003 17:03:53.750715    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:53.750982    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:53.750988    1527 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-585000 && echo "addons-585000" | sudo tee /etc/hostname
	I1003 17:03:53.772634    1527 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1003 17:03:56.876175    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-585000
	
	I1003 17:03:56.876314    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:56.876804    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:56.876820    1527 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-585000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-585000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-585000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 17:03:56.953972    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 17:03:56.953997    1527 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17345-986/.minikube CaCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17345-986/.minikube}
	I1003 17:03:56.954022    1527 buildroot.go:174] setting up certificates
	I1003 17:03:56.954030    1527 provision.go:83] configureAuth start
	I1003 17:03:56.954037    1527 provision.go:138] copyHostCerts
	I1003 17:03:56.954176    1527 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/ca.pem (1082 bytes)
	I1003 17:03:56.954543    1527 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/cert.pem (1123 bytes)
	I1003 17:03:56.954713    1527 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/key.pem (1679 bytes)
	I1003 17:03:56.954859    1527 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem org=jenkins.addons-585000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-585000]
	I1003 17:03:57.033117    1527 provision.go:172] copyRemoteCerts
	I1003 17:03:57.033177    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 17:03:57.033191    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:03:57.066294    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 17:03:57.072924    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1003 17:03:57.080102    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 17:03:57.087350    1527 provision.go:86] duration metric: configureAuth took 133.318167ms
	I1003 17:03:57.087358    1527 buildroot.go:189] setting minikube options for container-runtime
	I1003 17:03:57.087453    1527 config.go:182] Loaded profile config "addons-585000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:03:57.087487    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:57.087704    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:57.087709    1527 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 17:03:57.150120    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 17:03:57.150127    1527 buildroot.go:70] root file system type: tmpfs
	I1003 17:03:57.150191    1527 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 17:03:57.150229    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:57.150472    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:57.150508    1527 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 17:03:57.217705    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 17:03:57.217760    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:57.218004    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:57.218015    1527 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 17:03:57.578682    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 17:03:57.578700    1527 machine.go:91] provisioned docker machine in 3.829016209s
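	The diff invocation above is an idempotency guard: the freshly rendered unit only replaces the installed one, and docker is only re-enabled and restarted, when the two files differ, so re-provisioning an unchanged machine is a no-op. The same pattern from the logged command, spelled out:

	  # replace-and-restart only on change; diff exits non-zero when files differ or the target is missing
	  sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	  }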
	I1003 17:03:57.578705    1527 client.go:171] LocalClient.Create took 17.568053208s
	I1003 17:03:57.578718    1527 start.go:167] duration metric: libmachine.API.Create for "addons-585000" took 17.568154958s
	I1003 17:03:57.578727    1527 start.go:300] post-start starting for "addons-585000" (driver="qemu2")
	I1003 17:03:57.578734    1527 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 17:03:57.578804    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 17:03:57.578814    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:03:57.611980    1527 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 17:03:57.613271    1527 info.go:137] Remote host: Buildroot 2021.02.12
	I1003 17:03:57.613284    1527 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17345-986/.minikube/addons for local assets ...
	I1003 17:03:57.613351    1527 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17345-986/.minikube/files for local assets ...
	I1003 17:03:57.613375    1527 start.go:303] post-start completed in 34.643875ms
	I1003 17:03:57.613878    1527 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/config.json ...
	I1003 17:03:57.614052    1527 start.go:128] duration metric: createHost completed in 18.365139125s
	I1003 17:03:57.614072    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:57.614285    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:57.614290    1527 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 17:03:57.674033    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696377837.538666711
	
	I1003 17:03:57.674041    1527 fix.go:206] guest clock: 1696377837.538666711
	I1003 17:03:57.674045    1527 fix.go:219] Guest: 2023-10-03 17:03:57.538666711 -0700 PDT Remote: 2023-10-03 17:03:57.614055 -0700 PDT m=+18.473932959 (delta=-75.388289ms)
	I1003 17:03:57.674058    1527 fix.go:190] guest clock delta is within tolerance: -75.388289ms
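	The guest-clock check compares `date +%s.%N` run inside the VM against the host's wall clock at the moment the command returns; here the delta of roughly -75ms is within minikube's tolerance, so the guest clock is left alone. A hedged sketch of the same measurement, assuming SSH access as set up above and a `date` that supports %N (e.g. GNU date):

	  guest=$(ssh docker@192.168.105.2 date +%s.%N)   # guest wall clock
	  host=$(date +%s.%N)                             # host wall clock, sampled just after
	  echo "delta: $(echo "$guest - $host" | bc)s"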
	I1003 17:03:57.674061    1527 start.go:83] releasing machines lock for "addons-585000", held for 18.42518925s
	I1003 17:03:57.674383    1527 ssh_runner.go:195] Run: cat /version.json
	I1003 17:03:57.674394    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:03:57.674401    1527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 17:03:57.674438    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:03:57.752962    1527 ssh_runner.go:195] Run: systemctl --version
	I1003 17:03:57.755435    1527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 17:03:57.757647    1527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 17:03:57.757682    1527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 17:03:57.763566    1527 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 17:03:57.763573    1527 start.go:469] detecting cgroup driver to use...
	I1003 17:03:57.763688    1527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 17:03:57.769448    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1003 17:03:57.772711    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 17:03:57.776091    1527 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 17:03:57.776116    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 17:03:57.779455    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 17:03:57.782467    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 17:03:57.785325    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 17:03:57.788823    1527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 17:03:57.792297    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 17:03:57.795615    1527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 17:03:57.798463    1527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 17:03:57.801289    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:03:57.870368    1527 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 17:03:57.876293    1527 start.go:469] detecting cgroup driver to use...
	I1003 17:03:57.876338    1527 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 17:03:57.883607    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 17:03:57.888285    1527 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 17:03:57.899852    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 17:03:57.904354    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 17:03:57.909268    1527 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 17:03:57.948483    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 17:03:57.953584    1527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 17:03:57.958727    1527 ssh_runner.go:195] Run: which cri-dockerd
	I1003 17:03:57.959920    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 17:03:57.962534    1527 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1003 17:03:57.967042    1527 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 17:03:58.045502    1527 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 17:03:58.130551    1527 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 17:03:58.130610    1527 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
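	The 130-byte daemon.json payload itself is not echoed into the log. A plausible minimal file that selects the cgroupfs driver as the surrounding lines describe — an assumption, not the verbatim payload — would be:

	  # hypothetical /etc/docker/daemon.json; the actual bytes written here are not shown in the log
	  cat <<'EOF' | sudo tee /etc/docker/daemon.json
	  { "exec-opts": ["native.cgroupdriver=cgroupfs"] }
	  EOF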
	I1003 17:03:58.135707    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:03:58.217318    1527 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 17:03:59.375606    1527 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.158300667s)
	I1003 17:03:59.375671    1527 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 17:03:59.450200    1527 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 17:03:59.524947    1527 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 17:03:59.593104    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:03:59.661107    1527 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 17:03:59.668567    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:03:59.749258    1527 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1003 17:03:59.773399    1527 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 17:03:59.773477    1527 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 17:03:59.775487    1527 start.go:537] Will wait 60s for crictl version
	I1003 17:03:59.775515    1527 ssh_runner.go:195] Run: which crictl
	I1003 17:03:59.776779    1527 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 17:03:59.798500    1527 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1003 17:03:59.798566    1527 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 17:03:59.808398    1527 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 17:03:59.824846    1527 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1003 17:03:59.824981    1527 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1003 17:03:59.826582    1527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 17:03:59.830378    1527 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:03:59.830418    1527 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 17:03:59.836052    1527 docker.go:664] Got preloaded images: 
	I1003 17:03:59.836060    1527 docker.go:670] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I1003 17:03:59.836097    1527 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 17:03:59.839267    1527 ssh_runner.go:195] Run: which lz4
	I1003 17:03:59.840501    1527 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 17:03:59.841889    1527 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 17:03:59.841901    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I1003 17:04:01.160665    1527 docker.go:628] Took 1.320191 seconds to copy over tarball
	I1003 17:04:01.160727    1527 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 17:04:02.196965    1527 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.036251292s)
	I1003 17:04:02.196980    1527 ssh_runner.go:146] rm: /preloaded.tar.lz4
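	The preload path above avoids pulling images over the network: the host-side tarball is copied into the guest, unpacked straight into /var (which contains /var/lib/docker), then removed. Condensed, the guest-side steps are:

	  stat -c "%s %y" /preloaded.tar.lz4 || true   # absent on first boot, so the tarball is copied in over scp
	  sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	  rm /preloaded.tar.lz4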
	I1003 17:04:02.212861    1527 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 17:04:02.216574    1527 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1003 17:04:02.221618    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:04:02.297119    1527 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 17:04:03.756211    1527 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.459112709s)
	I1003 17:04:03.756322    1527 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 17:04:03.768480    1527 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 17:04:03.768491    1527 cache_images.go:84] Images are preloaded, skipping loading
	I1003 17:04:03.768560    1527 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 17:04:03.778188    1527 cni.go:84] Creating CNI manager for ""
	I1003 17:04:03.778197    1527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:04:03.778215    1527 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1003 17:04:03.778224    1527 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-585000 NodeName:addons-585000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 17:04:03.778291    1527 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-585000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 17:04:03.778325    1527 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-585000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-585000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
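	A config rendered like the one above can be exercised without touching the node: kubeadm accepts the file via --config and supports a dry run. A hedged sketch, using the path the config is copied to in the scp step below:

	  # validate the generated kubeadm config without creating anything on the node
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run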
	I1003 17:04:03.778383    1527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1003 17:04:03.781266    1527 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 17:04:03.781296    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 17:04:03.784487    1527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1003 17:04:03.789671    1527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 17:04:03.794606    1527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1003 17:04:03.799383    1527 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I1003 17:04:03.800642    1527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 17:04:03.804557    1527 certs.go:56] Setting up /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000 for IP: 192.168.105.2
	I1003 17:04:03.804566    1527 certs.go:190] acquiring lock for shared ca certs: {Name:mk60f926c1ccb065a30406b60af36acc708e601e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.804722    1527 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key
	I1003 17:04:03.876701    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt ...
	I1003 17:04:03.876706    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt: {Name:mk0cc174d1dbd071293e805ad6149c7ec4b142e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.876904    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key ...
	I1003 17:04:03.876908    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key: {Name:mk5b0f090e1e87c9db61f19ee029eeb4bf325f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.877012    1527 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key
	I1003 17:04:03.972780    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.crt ...
	I1003 17:04:03.972784    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.crt: {Name:mk86baa625f8f131b96564e73e4ff47f159af5b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.972918    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key ...
	I1003 17:04:03.972921    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key: {Name:mk9131c9bbe858f22b10b784ddbb510d37a1be7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.973043    1527 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.key
	I1003 17:04:03.973049    1527 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt with IP's: []
	I1003 17:04:04.093588    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt ...
	I1003 17:04:04.093595    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: {Name:mka78906c9a5365a7e95b92135f4b70302d9ca1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.093800    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.key ...
	I1003 17:04:04.093804    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.key: {Name:mk6f93cb157b068e90dc54f48279212367ea5933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.093915    1527 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key.96055969
	I1003 17:04:04.093926    1527 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1003 17:04:04.305627    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt.96055969 ...
	I1003 17:04:04.305631    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt.96055969: {Name:mk46c7ebd409ecd36224a01eb936cac8f04632ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.305820    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key.96055969 ...
	I1003 17:04:04.305826    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key.96055969: {Name:mk401a87dc7fd50281b18296e80430af94b0e1a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.305953    1527 certs.go:337] copying /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt
	I1003 17:04:04.306054    1527 certs.go:341] copying /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key
	I1003 17:04:04.306143    1527 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.key
	I1003 17:04:04.306160    1527 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.crt with IP's: []
	I1003 17:04:04.424122    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.crt ...
	I1003 17:04:04.424130    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.crt: {Name:mkdf8b0eea5ab20c208335ea1ea4eff82b50060d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.424327    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.key ...
	I1003 17:04:04.424330    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.key: {Name:mkfa742ebb9620853aeccd91e597f5c286ba74ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.424537    1527 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 17:04:04.424561    1527 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem (1082 bytes)
	I1003 17:04:04.424578    1527 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem (1123 bytes)
	I1003 17:04:04.424595    1527 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem (1679 bytes)
	I1003 17:04:04.424928    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1003 17:04:04.432625    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 17:04:04.440097    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 17:04:04.447625    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 17:04:04.454640    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 17:04:04.461322    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 17:04:04.468544    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 17:04:04.475781    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 17:04:04.482585    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 17:04:04.489167    1527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
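	Note: the block above shows minikube generating the cluster certificates on the host and pushing them into /var/lib/minikube/certs in the guest, which is why kubeadm later reports "Using existing ca certificate authority". A quick sanity check that a pushed cert/key pair matches (run inside the VM; illustrative, not part of the test run):

	    openssl x509 -noout -modulus -in /var/lib/minikube/certs/apiserver.crt | openssl md5
	    openssl rsa  -noout -modulus -in /var/lib/minikube/certs/apiserver.key | openssl md5
	    # identical digests mean the certificate and private key belong together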
	I1003 17:04:04.495146    1527 ssh_runner.go:195] Run: openssl version
	I1003 17:04:04.497003    1527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 17:04:04.500457    1527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:04:04.502228    1527 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:04:04.502254    1527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:04:04.504052    1527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
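	Note: the link name b5213941.0 is not arbitrary; it is the OpenSSL subject hash of the CA certificate computed by the command two lines above, which is how the system trust store looks certificates up:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # prints b5213941, hence the trust-store symlink /etc/ssl/certs/b5213941.0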
	I1003 17:04:04.507353    1527 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1003 17:04:04.508658    1527 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1003 17:04:04.508694    1527 kubeadm.go:404] StartCluster: {Name:addons-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-585000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:04:04.508756    1527 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 17:04:04.518400    1527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 17:04:04.521199    1527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 17:04:04.524450    1527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 17:04:04.527574    1527 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
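	Note: exit status 2 from the ls above just means none of the four kubeconfigs exist yet, so this is a fresh node and the stale-config cleanup is correctly skipped. The same check by hand (illustrative):

	    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	      /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; echo "exit=$?"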
	I1003 17:04:04.527588    1527 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 17:04:04.548358    1527 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1003 17:04:04.548382    1527 kubeadm.go:322] [preflight] Running pre-flight checks
	I1003 17:04:04.607082    1527 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 17:04:04.607142    1527 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 17:04:04.607188    1527 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1003 17:04:04.714297    1527 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 17:04:04.722481    1527 out.go:204]   - Generating certificates and keys ...
	I1003 17:04:04.722514    1527 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1003 17:04:04.722542    1527 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1003 17:04:04.759296    1527 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 17:04:04.808856    1527 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1003 17:04:04.964910    1527 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1003 17:04:05.087030    1527 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1003 17:04:05.149596    1527 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1003 17:04:05.149646    1527 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-585000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I1003 17:04:05.222012    1527 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1003 17:04:05.222068    1527 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-585000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I1003 17:04:05.286742    1527 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 17:04:05.330145    1527 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 17:04:05.629678    1527 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1003 17:04:05.629709    1527 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 17:04:05.731900    1527 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 17:04:05.854397    1527 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 17:04:05.983877    1527 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 17:04:06.151732    1527 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 17:04:06.152571    1527 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 17:04:06.153646    1527 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 17:04:06.157945    1527 out.go:204]   - Booting up control plane ...
	I1003 17:04:06.158024    1527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 17:04:06.158075    1527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 17:04:06.158108    1527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 17:04:06.161462    1527 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 17:04:06.161792    1527 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 17:04:06.161837    1527 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1003 17:04:06.249226    1527 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1003 17:04:09.750516    1527 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.501291 seconds
	I1003 17:04:09.750582    1527 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 17:04:09.756399    1527 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 17:04:10.266794    1527 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 17:04:10.266918    1527 kubeadm.go:322] [mark-control-plane] Marking the node addons-585000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 17:04:10.770745    1527 kubeadm.go:322] [bootstrap-token] Using token: uzkazy.ii0fjdqhazr4xlxp
	I1003 17:04:10.779414    1527 out.go:204]   - Configuring RBAC rules ...
	I1003 17:04:10.779490    1527 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 17:04:10.779537    1527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 17:04:10.781300    1527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 17:04:10.782670    1527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 17:04:10.783591    1527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 17:04:10.784726    1527 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 17:04:10.788735    1527 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 17:04:10.960644    1527 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1003 17:04:11.179744    1527 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1003 17:04:11.180034    1527 kubeadm.go:322] 
	I1003 17:04:11.180066    1527 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1003 17:04:11.180069    1527 kubeadm.go:322] 
	I1003 17:04:11.180099    1527 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1003 17:04:11.180102    1527 kubeadm.go:322] 
	I1003 17:04:11.180113    1527 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1003 17:04:11.180141    1527 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 17:04:11.180162    1527 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 17:04:11.180170    1527 kubeadm.go:322] 
	I1003 17:04:11.180201    1527 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1003 17:04:11.180204    1527 kubeadm.go:322] 
	I1003 17:04:11.180235    1527 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 17:04:11.180239    1527 kubeadm.go:322] 
	I1003 17:04:11.180270    1527 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1003 17:04:11.180305    1527 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 17:04:11.180339    1527 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 17:04:11.180343    1527 kubeadm.go:322] 
	I1003 17:04:11.180390    1527 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 17:04:11.180428    1527 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1003 17:04:11.180432    1527 kubeadm.go:322] 
	I1003 17:04:11.180481    1527 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uzkazy.ii0fjdqhazr4xlxp \
	I1003 17:04:11.180530    1527 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9490a7a2ce6f448c9720863afea1604779c0ef0543f588db367e732362807037 \
	I1003 17:04:11.180544    1527 kubeadm.go:322] 	--control-plane 
	I1003 17:04:11.180546    1527 kubeadm.go:322] 
	I1003 17:04:11.180583    1527 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1003 17:04:11.180585    1527 kubeadm.go:322] 
	I1003 17:04:11.180630    1527 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uzkazy.ii0fjdqhazr4xlxp \
	I1003 17:04:11.180681    1527 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9490a7a2ce6f448c9720863afea1604779c0ef0543f588db367e732362807037 
	I1003 17:04:11.180859    1527 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
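	Note: the Service-Kubelet warning is benign here, since minikube manages the kubelet unit itself; on a hand-provisioned node you would run `sudo systemctl enable kubelet.service`. The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the CA public key and can be recomputed from the CA certificate (this cluster uses certificateDir /var/lib/minikube/certs rather than kubeadm's default /etc/kubernetes/pki):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'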
	I1003 17:04:11.180869    1527 cni.go:84] Creating CNI manager for ""
	I1003 17:04:11.180879    1527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:04:11.187987    1527 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1003 17:04:11.191059    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1003 17:04:11.194034    1527 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
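	Note: the 457-byte conflist written above enables the plain bridge CNI that the driver/runtime detection recommended. The exact file content is not shown in the log; a representative bridge + portmap conflist looks roughly like this (illustrative only, field values assumed):

	    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF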
	I1003 17:04:11.198752    1527 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 17:04:11.198795    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:11.198820    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d9526fa6c1a1bb1b20f95e15606f1308e308d84a minikube.k8s.io/name=addons-585000 minikube.k8s.io/updated_at=2023_10_03T17_04_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:11.257808    1527 ops.go:34] apiserver oom_adj: -16
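	Note: oom_adj uses the legacy kernel scale of -17..15, so the -16 reported above makes kube-apiserver one of the last processes the OOM killer will pick. The check is simply:

	    cat /proc/$(pgrep kube-apiserver)/oom_adj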
	I1003 17:04:11.257825    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:11.294996    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:11.828345    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:12.328361    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:12.828330    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:13.328397    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:13.828294    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:14.328299    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:14.828282    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:15.328311    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:15.828265    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:16.328253    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:16.828279    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:17.328231    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:17.828247    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:18.328230    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:18.828192    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:19.328127    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:19.828168    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:20.328197    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:20.827992    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:21.327973    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:21.828182    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:22.328181    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:22.828070    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:23.328028    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:23.828030    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:24.327999    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:24.363217    1527 kubeadm.go:1081] duration metric: took 13.164797125s to wait for elevateKubeSystemPrivileges.
	I1003 17:04:24.363234    1527 kubeadm.go:406] StartCluster complete in 19.855065708s
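	Note: the repeated "get sa default" calls above are elevateKubeSystemPrivileges polling until the default ServiceAccount exists, so that the minikube-rbac clusterrolebinding created earlier has a subject to bind. The equivalent manual wait (illustrative):

	    until sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done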
	I1003 17:04:24.363243    1527 settings.go:142] acquiring lock: {Name:mkad5f21e92defa14247d9a0adf05a6e4272cec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:24.363390    1527 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:04:24.363569    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/kubeconfig: {Name:mke3e06a6a2057954076f4772b87ca4469721c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:24.363810    1527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 17:04:24.363870    1527 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1003 17:04:24.363932    1527 addons.go:69] Setting volumesnapshots=true in profile "addons-585000"
	I1003 17:04:24.363942    1527 addons.go:231] Setting addon volumesnapshots=true in "addons-585000"
	I1003 17:04:24.363950    1527 config.go:182] Loaded profile config "addons-585000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:04:24.363971    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363950    1527 addons.go:69] Setting ingress-dns=true in profile "addons-585000"
	I1003 17:04:24.364021    1527 addons.go:231] Setting addon ingress-dns=true in "addons-585000"
	I1003 17:04:24.364047    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363981    1527 addons.go:69] Setting registry=true in profile "addons-585000"
	I1003 17:04:24.364069    1527 addons.go:231] Setting addon registry=true in "addons-585000"
	I1003 17:04:24.364086    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363984    1527 addons.go:69] Setting inspektor-gadget=true in profile "addons-585000"
	I1003 17:04:24.364126    1527 addons.go:231] Setting addon inspektor-gadget=true in "addons-585000"
	I1003 17:04:24.364190    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363986    1527 addons.go:69] Setting metrics-server=true in profile "addons-585000"
	I1003 17:04:24.364214    1527 addons.go:231] Setting addon metrics-server=true in "addons-585000"
	I1003 17:04:24.364247    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363988    1527 addons.go:69] Setting gcp-auth=true in profile "addons-585000"
	I1003 17:04:24.364271    1527 mustload.go:65] Loading cluster: addons-585000
	W1003 17:04:24.364287    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	W1003 17:04:24.364300    1527 addons.go:277] "addons-585000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W1003 17:04:24.364313    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	W1003 17:04:24.364319    1527 addons.go:277] "addons-585000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I1003 17:04:24.364322    1527 addons.go:467] Verifying addon registry=true in "addons-585000"
	I1003 17:04:24.370457    1527 out.go:177] * Verifying registry addon...
	I1003 17:04:24.364356    1527 config.go:182] Loaded profile config "addons-585000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:04:24.363992    1527 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-585000"
	I1003 17:04:24.370476    1527 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-585000"
	I1003 17:04:24.363993    1527 addons.go:69] Setting ingress=true in profile "addons-585000"
	I1003 17:04:24.363995    1527 addons.go:69] Setting cloud-spanner=true in profile "addons-585000"
	I1003 17:04:24.370530    1527 addons.go:231] Setting addon cloud-spanner=true in "addons-585000"
	I1003 17:04:24.363994    1527 addons.go:69] Setting default-storageclass=true in profile "addons-585000"
	I1003 17:04:24.370529    1527 addons.go:231] Setting addon ingress=true in "addons-585000"
	I1003 17:04:24.370591    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.370597    1527 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-585000"
	I1003 17:04:24.363996    1527 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-585000"
	W1003 17:04:24.364679    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	I1003 17:04:24.370563    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363989    1527 addons.go:69] Setting storage-provisioner=true in profile "addons-585000"
	I1003 17:04:24.370642    1527 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-585000"
	W1003 17:04:24.370646    1527 addons.go:277] "addons-585000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W1003 17:04:24.370793    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	W1003 17:04:24.370892    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	I1003 17:04:24.371573    1527 addons.go:231] Setting addon default-storageclass=true in "addons-585000"
	I1003 17:04:24.371900    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.380387    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1003 17:04:24.383435    1527 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1003 17:04:24.383459    1527 addons.go:231] Setting addon storage-provisioner=true in "addons-585000"
	W1003 17:04:24.383452    1527 addons.go:277] "addons-585000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W1003 17:04:24.383476    1527 addons_storage_classes.go:57] "addons-585000" is not running, writing storage-provisioner-rancher=true to disk and skipping enablement
	I1003 17:04:24.383496    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.383533    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.383999    1527 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1003 17:04:24.386486    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1003 17:04:24.389459    1527 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1003 17:04:24.389462    1527 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-585000"
	I1003 17:04:24.389488    1527 addons.go:467] Verifying addon ingress=true in "addons-585000"
	I1003 17:04:24.389521    1527 host.go:66] Checking if "addons-585000" exists ...
	W1003 17:04:24.389784    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	I1003 17:04:24.395446    1527 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1003 17:04:24.395461    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.395467    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1003 17:04:24.395470    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	W1003 17:04:24.395486    1527 addons.go:277] "addons-585000" is not running, setting default-storageclass=true and skipping enablement (err=<nil>)
	I1003 17:04:24.398475    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1003 17:04:24.398484    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.398497    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.407159    1527 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I1003 17:04:24.408397    1527 out.go:177] * Verifying ingress addon...
	I1003 17:04:24.414483    1527 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 17:04:24.418517    1527 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1003 17:04:24.421445    1527 out.go:177]   - Using image docker.io/busybox:stable
	I1003 17:04:24.423490    1527 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-585000" context rescaled to 1 replicas
	I1003 17:04:24.424496    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1003 17:04:24.427548    1527 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:04:24.427858    1527 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1003 17:04:24.430982    1527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
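	Note: the sed pipeline above rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to the host's gateway address (its completion is logged further below with a ~1.0s duration metric). After the replace, the Corefile carries a stanza like the one in the comment; inspect it with:

	    #     hosts {
	    #        192.168.105.1 host.minikube.internal
	    #        fallthrough
	    #     }
	    sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'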
	I1003 17:04:24.433444    1527 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 17:04:24.439439    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1003 17:04:24.439555    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.442455    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 17:04:24.442466    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.447401    1527 out.go:177] * Verifying Kubernetes components...
	I1003 17:04:24.451674    1527 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1003 17:04:24.453485    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 17:04:24.457496    1527 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1003 17:04:24.467442    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1003 17:04:24.480410    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1003 17:04:24.476494    1527 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1003 17:04:24.483451    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1003 17:04:24.486383    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1003 17:04:24.483462    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.485254    1527 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1003 17:04:24.489434    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1003 17:04:24.492499    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1003 17:04:24.499438    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1003 17:04:24.508438    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1003 17:04:24.512295    1527 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1003 17:04:24.512323    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1003 17:04:24.512325    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1003 17:04:24.512338    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1003 17:04:24.512347    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.558122    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 17:04:24.567387    1527 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1003 17:04:24.567399    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1003 17:04:24.572776    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1003 17:04:24.599074    1527 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1003 17:04:24.599085    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1003 17:04:24.675063    1527 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1003 17:04:24.675075    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1003 17:04:24.676328    1527 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1003 17:04:24.676333    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1003 17:04:24.739366    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1003 17:04:24.739377    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1003 17:04:24.749052    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1003 17:04:24.770490    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1003 17:04:24.770501    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1003 17:04:24.785371    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1003 17:04:24.881017    1527 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 17:04:24.881027    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1003 17:04:24.903016    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1003 17:04:24.903028    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1003 17:04:24.969674    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1003 17:04:24.969686    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1003 17:04:25.049120    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 17:04:25.150729    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1003 17:04:25.150740    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1003 17:04:25.240313    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1003 17:04:25.240326    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1003 17:04:25.371181    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1003 17:04:25.371193    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1003 17:04:25.404463    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1003 17:04:25.404477    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1003 17:04:25.434986    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1003 17:04:25.434995    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1003 17:04:25.450155    1527 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.007696167s)
	I1003 17:04:25.450172    1527 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I1003 17:04:25.450599    1527 node_ready.go:35] waiting up to 6m0s for node "addons-585000" to be "Ready" ...
	I1003 17:04:25.452207    1527 node_ready.go:49] node "addons-585000" has status "Ready":"True"
	I1003 17:04:25.452226    1527 node_ready.go:38] duration metric: took 1.601791ms waiting for node "addons-585000" to be "Ready" ...
	I1003 17:04:25.452231    1527 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1003 17:04:25.455004    1527 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace to be "Ready" ...
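	Note: pod_ready.go's wait is equivalent to a kubectl wait on the pod's Ready condition; done by hand it would be (illustrative):

	    sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system wait --for=condition=Ready pod/coredns-5dd5756b68-khk2s --timeout=6m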
	I1003 17:04:25.474770    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1003 17:04:25.474780    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1003 17:04:25.542278    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1003 17:04:25.542289    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1003 17:04:25.568058    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1003 17:04:25.900055    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.327297041s)
	I1003 17:04:25.900073    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.151039167s)
	I1003 17:04:25.900620    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.342523s)
	I1003 17:04:26.031761    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246402458s)
	I1003 17:04:26.031779    1527 addons.go:467] Verifying addon metrics-server=true in "addons-585000"
	W1003 17:04:26.031811    1527 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1003 17:04:26.031889    1527 retry.go:31] will retry after 325.291799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
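	Note: this failure is the usual CRD ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, and API discovery has not yet registered the new kind, hence "ensure CRDs are installed first". minikube simply retries (successfully, below) once the CRDs from the first attempt are registered; a race-free sequence would apply and await the CRD before the custom resource (illustrative):

	    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    kubectl wait --for=condition=Established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml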
	I1003 17:04:26.358314    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 17:04:27.197455    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.629412791s)
	I1003 17:04:27.197475    1527 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-585000"
	I1003 17:04:27.203259    1527 out.go:177] * Verifying csi-hostpath-driver addon...
	I1003 17:04:27.212640    1527 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1003 17:04:27.217488    1527 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1003 17:04:27.217496    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:27.220905    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:27.463742    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:27.724717    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:28.224875    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:28.724679    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:28.978938    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.620667959s)
	I1003 17:04:29.224602    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:29.725503    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:29.965298    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:30.225992    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:30.727405    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:30.990463    1527 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1003 17:04:30.990480    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:31.027829    1527 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1003 17:04:31.032721    1527 addons.go:231] Setting addon gcp-auth=true in "addons-585000"
	I1003 17:04:31.032748    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:31.033437    1527 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1003 17:04:31.033445    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:31.067911    1527 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1003 17:04:31.071726    1527 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1003 17:04:31.074751    1527 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1003 17:04:31.074757    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1003 17:04:31.080083    1527 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1003 17:04:31.080090    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1003 17:04:31.084923    1527 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1003 17:04:31.084929    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1003 17:04:31.089910    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1003 17:04:31.227750    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:31.348093    1527 addons.go:467] Verifying addon gcp-auth=true in "addons-585000"
	I1003 17:04:31.351542    1527 out.go:177] * Verifying gcp-auth addon...
	I1003 17:04:31.357561    1527 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1003 17:04:31.360910    1527 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1003 17:04:31.360918    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:31.363860    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:31.729484    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:31.868680    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:32.231013    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:32.368635    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:32.469496    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:32.731026    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:32.869106    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:33.231972    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:33.370303    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:33.732757    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:33.870865    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:34.233653    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:34.371764    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:34.734158    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:34.872420    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:34.973595    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:35.235439    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:35.373126    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:35.736125    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:35.873867    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:36.236732    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:36.377065    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:36.738038    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:36.875114    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:37.237689    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:37.375767    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:37.476370    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:37.738419    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:37.876233    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:38.238661    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:38.376555    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:38.739112    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:38.877451    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:39.239515    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:39.378140    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:39.479966    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:39.740793    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:39.878589    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:40.240762    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:40.379208    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:40.741780    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:40.879681    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:41.241996    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:41.380297    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:41.742785    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:41.880492    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:41.981384    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:42.243296    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:42.381074    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:42.743302    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:42.883349    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:43.244024    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:43.381895    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:43.744586    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:43.882405    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:44.244990    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:44.382844    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:44.483439    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:44.745332    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:44.884458    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:45.245668    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:45.383767    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:45.746153    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:45.884389    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:46.246346    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:46.384557    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:46.486038    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:46.746583    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:46.885125    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:47.247216    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:47.385576    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:47.747761    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:47.885573    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:48.247887    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:48.385997    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:48.748143    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:48.886329    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:48.987206    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:49.248369    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:49.386625    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:49.749134    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:49.886585    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:50.249103    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:50.386927    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:50.749495    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:50.887744    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:50.988394    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:51.249792    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:51.387776    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:51.749720    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:51.887719    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:52.249981    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:52.389544    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:52.751121    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:52.888557    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:53.250511    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:53.388659    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:53.490543    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:53.750650    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:53.888553    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:54.250803    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:54.389104    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:54.751294    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:54.889472    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:55.253309    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:55.389552    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:55.751715    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:55.889618    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:55.990684    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:56.251672    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:56.389851    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:56.752031    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:56.889910    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:57.252259    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:57.390082    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:57.752574    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:57.890363    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:57.991811    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:58.252678    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:58.390362    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:58.752590    1527 kapi.go:107] duration metric: took 31.511793709s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1003 17:04:58.890958    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:59.391228    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:59.891535    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:00.391427    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:00.491918    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:05:00.891593    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:01.391491    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:01.891778    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:02.391830    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:02.492542    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:05:02.891986    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:03.393570    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:03.892644    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:04.392440    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:04.892576    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:04.992969    1527 pod_ready.go:92] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:04.992976    1527 pod_ready.go:81] duration metric: took 39.508226208s waiting for pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.992981    1527 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r24k8" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.993883    1527 pod_ready.go:97] error getting pod "coredns-5dd5756b68-r24k8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-r24k8" not found
	I1003 17:05:04.993891    1527 pod_ready.go:81] duration metric: took 907.208µs waiting for pod "coredns-5dd5756b68-r24k8" in "kube-system" namespace to be "Ready" ...
	E1003 17:05:04.993895    1527 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-r24k8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-r24k8" not found
	I1003 17:05:04.993899    1527 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.996401    1527 pod_ready.go:92] pod "etcd-addons-585000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:04.996406    1527 pod_ready.go:81] duration metric: took 2.500458ms waiting for pod "etcd-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.996410    1527 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.999071    1527 pod_ready.go:92] pod "kube-apiserver-addons-585000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:04.999079    1527 pod_ready.go:81] duration metric: took 2.666208ms waiting for pod "kube-apiserver-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.999082    1527 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.001717    1527 pod_ready.go:92] pod "kube-controller-manager-addons-585000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:05.001725    1527 pod_ready.go:81] duration metric: took 2.637584ms waiting for pod "kube-controller-manager-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.001728    1527 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4m9nm" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.193135    1527 pod_ready.go:92] pod "kube-proxy-4m9nm" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:05.193144    1527 pod_ready.go:81] duration metric: took 191.372792ms waiting for pod "kube-proxy-4m9nm" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.193148    1527 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.392551    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:05.593390    1527 pod_ready.go:92] pod "kube-scheduler-addons-585000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:05.593399    1527 pod_ready.go:81] duration metric: took 400.165917ms waiting for pod "kube-scheduler-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.593404    1527 pod_ready.go:38] duration metric: took 40.11130675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
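
The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True. A minimal client-go sketch of that check — the helper name waitPodReady, the 2-second poll interval, and the error handling are illustrative assumptions, not minikube's actual implementation; the kubeconfig path is the one used elsewhere in this log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named pod's Ready condition is True or the
    // timeout expires. Transient Get errors are swallowed so polling continues.
    func waitPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil
            }
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady {
                    return cond.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        c := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitPodReady(c, "kube-system", "coredns-5dd5756b68-khk2s", 6*time.Minute))
    }
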
	I1003 17:05:05.593416    1527 api_server.go:52] waiting for apiserver process to appear ...
	I1003 17:05:05.593483    1527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 17:05:05.598747    1527 api_server.go:72] duration metric: took 41.126444291s to wait for apiserver process to appear ...
	I1003 17:05:05.598757    1527 api_server.go:88] waiting for apiserver healthz status ...
	I1003 17:05:05.598764    1527 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I1003 17:05:05.601839    1527 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I1003 17:05:05.602592    1527 api_server.go:141] control plane version: v1.28.2
	I1003 17:05:05.602598    1527 api_server.go:131] duration metric: took 3.8385ms to wait for apiserver health ...
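
The healthz check above is a plain HTTPS GET against the apiserver; /healthz is readable without credentials under the default RBAC rules. A quick probe in the same shape (InsecureSkipVerify is for illustration only — a real check should load the cluster CA, e.g. /var/lib/minikube/certs/ca.crt, into the TLS config):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.105.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
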
	I1003 17:05:05.602602    1527 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 17:05:05.796281    1527 system_pods.go:59] 13 kube-system pods found
	I1003 17:05:05.796294    1527 system_pods.go:61] "coredns-5dd5756b68-khk2s" [c45559e9-de80-4305-942c-094315d94d47] Running
	I1003 17:05:05.796297    1527 system_pods.go:61] "csi-hostpath-attacher-0" [63177292-e73a-431c-a767-00cf3ce9bce0] Running
	I1003 17:05:05.796299    1527 system_pods.go:61] "csi-hostpath-resizer-0" [8f737860-0999-4db9-9f65-190d64a4cfb4] Running
	I1003 17:05:05.796301    1527 system_pods.go:61] "csi-hostpathplugin-8thxw" [4548034b-9d0c-4d9d-9f9f-839610935d97] Running
	I1003 17:05:05.796303    1527 system_pods.go:61] "etcd-addons-585000" [f1858d65-44e4-469b-8372-a4e1e0a14d48] Running
	I1003 17:05:05.796306    1527 system_pods.go:61] "kube-apiserver-addons-585000" [377e1779-3201-42b4-945b-e2195b3f0a9a] Running
	I1003 17:05:05.796308    1527 system_pods.go:61] "kube-controller-manager-addons-585000" [f9156947-57be-49f5-9a1c-aadaa5bddf0c] Running
	I1003 17:05:05.796310    1527 system_pods.go:61] "kube-proxy-4m9nm" [847e005d-c7e1-4128-9ac6-2fef6730e3e4] Running
	I1003 17:05:05.796312    1527 system_pods.go:61] "kube-scheduler-addons-585000" [5b3e2487-5781-451e-b566-814b86a4eb80] Running
	I1003 17:05:05.796315    1527 system_pods.go:61] "metrics-server-7c66d45ddc-bvd9d" [e70964ce-39c1-444c-bfcc-e672f6027857] Running
	I1003 17:05:05.796317    1527 system_pods.go:61] "snapshot-controller-58dbcc7b99-2gg5z" [3f9fa5f9-6948-4b90-9114-e5bdb6822c87] Running
	I1003 17:05:05.796319    1527 system_pods.go:61] "snapshot-controller-58dbcc7b99-t2wrg" [174abadb-36b9-4fab-b1e2-44fa5de2def5] Running
	I1003 17:05:05.796321    1527 system_pods.go:61] "storage-provisioner" [41c3bd79-1e45-45d3-a1d8-9f5a2c5c7da5] Running
	I1003 17:05:05.796324    1527 system_pods.go:74] duration metric: took 193.680291ms to wait for pod list to return data ...
	I1003 17:05:05.796329    1527 default_sa.go:34] waiting for default service account to be created ...
	I1003 17:05:05.892491    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:05.992623    1527 default_sa.go:45] found service account: "default"
	I1003 17:05:05.992632    1527 default_sa.go:55] duration metric: took 196.262458ms for default service account to be created ...
	I1003 17:05:05.992638    1527 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 17:05:06.196272    1527 system_pods.go:86] 13 kube-system pods found
	I1003 17:05:06.196281    1527 system_pods.go:89] "coredns-5dd5756b68-khk2s" [c45559e9-de80-4305-942c-094315d94d47] Running
	I1003 17:05:06.196284    1527 system_pods.go:89] "csi-hostpath-attacher-0" [63177292-e73a-431c-a767-00cf3ce9bce0] Running
	I1003 17:05:06.196286    1527 system_pods.go:89] "csi-hostpath-resizer-0" [8f737860-0999-4db9-9f65-190d64a4cfb4] Running
	I1003 17:05:06.196288    1527 system_pods.go:89] "csi-hostpathplugin-8thxw" [4548034b-9d0c-4d9d-9f9f-839610935d97] Running
	I1003 17:05:06.196290    1527 system_pods.go:89] "etcd-addons-585000" [f1858d65-44e4-469b-8372-a4e1e0a14d48] Running
	I1003 17:05:06.196292    1527 system_pods.go:89] "kube-apiserver-addons-585000" [377e1779-3201-42b4-945b-e2195b3f0a9a] Running
	I1003 17:05:06.196294    1527 system_pods.go:89] "kube-controller-manager-addons-585000" [f9156947-57be-49f5-9a1c-aadaa5bddf0c] Running
	I1003 17:05:06.196296    1527 system_pods.go:89] "kube-proxy-4m9nm" [847e005d-c7e1-4128-9ac6-2fef6730e3e4] Running
	I1003 17:05:06.196298    1527 system_pods.go:89] "kube-scheduler-addons-585000" [5b3e2487-5781-451e-b566-814b86a4eb80] Running
	I1003 17:05:06.196300    1527 system_pods.go:89] "metrics-server-7c66d45ddc-bvd9d" [e70964ce-39c1-444c-bfcc-e672f6027857] Running
	I1003 17:05:06.196302    1527 system_pods.go:89] "snapshot-controller-58dbcc7b99-2gg5z" [3f9fa5f9-6948-4b90-9114-e5bdb6822c87] Running
	I1003 17:05:06.196304    1527 system_pods.go:89] "snapshot-controller-58dbcc7b99-t2wrg" [174abadb-36b9-4fab-b1e2-44fa5de2def5] Running
	I1003 17:05:06.196306    1527 system_pods.go:89] "storage-provisioner" [41c3bd79-1e45-45d3-a1d8-9f5a2c5c7da5] Running
	I1003 17:05:06.196310    1527 system_pods.go:126] duration metric: took 203.63025ms to wait for k8s-apps to be running ...
	I1003 17:05:06.196313    1527 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 17:05:06.196369    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 17:05:06.201921    1527 system_svc.go:56] duration metric: took 5.604333ms WaitForService to wait for kubelet.
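
The kubelet check above relies only on the exit code of systemctl is-active: zero means the unit is active, anything else means it is not. A local sketch of the same test (minikube runs it over SSH on the node via ssh_runner; the literal invocation above also carries an extra "service" token, dropped here since a bare unit name suffices):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` prints nothing; only the exit
        // code is inspected, exactly as in the WaitForService step above.
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
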
	I1003 17:05:06.201929    1527 kubeadm.go:581] duration metric: took 41.729511541s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1003 17:05:06.201940    1527 node_conditions.go:102] verifying NodePressure condition ...
	I1003 17:05:06.392698    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:06.393051    1527 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1003 17:05:06.393087    1527 node_conditions.go:123] node cpu capacity is 2
	I1003 17:05:06.393093    1527 node_conditions.go:105] duration metric: took 191.114375ms to run NodePressure ...
	I1003 17:05:06.393098    1527 start.go:228] waiting for startup goroutines ...
	I1003 17:05:06.892943    1527 kapi.go:107] duration metric: took 35.5093015s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1003 17:05:06.897190    1527 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-585000 cluster.
	I1003 17:05:06.900088    1527 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1003 17:05:06.903114    1527 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
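
Per the message above, the gcp-auth webhook skips any pod carrying a label with the gcp-auth-skip-secret key. A sketch of such a pod built with the client-go types — the pod name, image, and label value "true" are arbitrary illustrations (the message only requires the key):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "no-gcp-creds",
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
            },
        }
        fmt.Printf("%+v\n", pod.ObjectMeta.Labels)
    }
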
	I1003 17:10:24.417571    1527 kapi.go:107] duration metric: took 6m0.007521708s to wait for kubernetes.io/minikube-addons=registry ...
	W1003 17:10:24.417687    1527 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1003 17:10:24.469744    1527 kapi.go:107] duration metric: took 6m0.015851042s to wait for app.kubernetes.io/name=ingress-nginx ...
	W1003 17:10:24.469781    1527 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I1003 17:10:24.476811    1527 out.go:177] * Enabled addons: ingress-dns, inspektor-gadget, default-storageclass, cloud-spanner, storage-provisioner-rancher, storage-provisioner, metrics-server, volumesnapshots, csi-hostpath-driver, gcp-auth
	I1003 17:10:24.483852    1527 addons.go:502] enable addons completed in 6m0.093974291s: enabled=[ingress-dns inspektor-gadget default-storageclass cloud-spanner storage-provisioner-rancher storage-provisioner metrics-server volumesnapshots csi-hostpath-driver gcp-auth]
	I1003 17:10:24.483865    1527 start.go:233] waiting for cluster config update ...
	I1003 17:10:24.483873    1527 start.go:242] writing updated cluster config ...
	I1003 17:10:24.484152    1527 ssh_runner.go:195] Run: rm -f paused
	I1003 17:10:24.580795    1527 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I1003 17:10:24.588925    1527 out.go:177] * Done! kubectl is now configured to use "addons-585000" cluster and "default" namespace by default
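
The registry and ingress failures above ("context deadline exceeded" after 6m0s) come from label-selector waits whose deadline expires before any matching pod runs. A rough sketch of that pattern — not minikube's kapi.go implementation; the namespace, selector, and 2-second retry are assumptions for illustration:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        c := kubernetes.NewForConfigOrDie(cfg)

        // Poll pods matching the addon's label selector under a 6-minute
        // deadline. If no matching pod ever reaches Running (as with the
        // registry addon above), ctx.Err() is the "context deadline exceeded"
        // surfaced in the warnings.
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            pods, err := c.CoreV1().Pods("kube-system").List(ctx,
                metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
            if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == "Running" {
                fmt.Println("ready")
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println(ctx.Err()) // context deadline exceeded
                return
            case <-time.After(2 * time.Second):
            }
        }
    }
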
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-10-04 00:03:51 UTC, ends at Wed 2023-10-04 00:22:24 UTC. --
	Oct 04 00:12:08 addons-585000 dockerd[1122]: time="2023-10-04T00:12:08.349007007Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1115]: time="2023-10-04T00:12:14.240701963Z" level=info msg="ignoring event" container=5ff0bef51f286679f0f3e673886c6c3125c354cab0674eabec453da6975a6991 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.241579343Z" level=info msg="shim disconnected" id=5ff0bef51f286679f0f3e673886c6c3125c354cab0674eabec453da6975a6991 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.241698010Z" level=warning msg="cleaning up after shim disconnected" id=5ff0bef51f286679f0f3e673886c6c3125c354cab0674eabec453da6975a6991 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.241717594Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1115]: time="2023-10-04T00:12:14.242239805Z" level=info msg="ignoring event" container=517d59bdac0d4789df6d0fe970fb5d29190888751abb5196cf7cdf1d701cec03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.242324306Z" level=info msg="shim disconnected" id=517d59bdac0d4789df6d0fe970fb5d29190888751abb5196cf7cdf1d701cec03 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.242883392Z" level=warning msg="cleaning up after shim disconnected" id=517d59bdac0d4789df6d0fe970fb5d29190888751abb5196cf7cdf1d701cec03 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.242933309Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.323347774Z" level=info msg="shim disconnected" id=4f4a827bdb5aaf43cabfbf9cb0a34b674a64751dd5011a061778c8476e531e8e namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.323376233Z" level=warning msg="cleaning up after shim disconnected" id=4f4a827bdb5aaf43cabfbf9cb0a34b674a64751dd5011a061778c8476e531e8e namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.323380274Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1115]: time="2023-10-04T00:12:14.323530942Z" level=info msg="ignoring event" container=4f4a827bdb5aaf43cabfbf9cb0a34b674a64751dd5011a061778c8476e531e8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:14 addons-585000 dockerd[1115]: time="2023-10-04T00:12:14.329811145Z" level=info msg="ignoring event" container=c09f627f07f68d859d6bfdefb05a6e13d8c3efbab921fb36772c99b884785536 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.329912979Z" level=info msg="shim disconnected" id=c09f627f07f68d859d6bfdefb05a6e13d8c3efbab921fb36772c99b884785536 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.329940521Z" level=warning msg="cleaning up after shim disconnected" id=c09f627f07f68d859d6bfdefb05a6e13d8c3efbab921fb36772c99b884785536 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.329944687Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1115]: time="2023-10-04T00:12:20.574199852Z" level=info msg="ignoring event" container=ecef984296b3c318eabdfadfcfd794578d7d5b79b06b6d15401d400fa2c41351 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.574490520Z" level=info msg="shim disconnected" id=ecef984296b3c318eabdfadfcfd794578d7d5b79b06b6d15401d400fa2c41351 namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.574591896Z" level=warning msg="cleaning up after shim disconnected" id=ecef984296b3c318eabdfadfcfd794578d7d5b79b06b6d15401d400fa2c41351 namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.574612355Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1115]: time="2023-10-04T00:12:20.629572097Z" level=info msg="ignoring event" container=7c628d57a23adad9e457875314fcdc47575df49afc2fd60f3e2f12f92707c23a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.629930724Z" level=info msg="shim disconnected" id=7c628d57a23adad9e457875314fcdc47575df49afc2fd60f3e2f12f92707c23a namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.629958933Z" level=warning msg="cleaning up after shim disconnected" id=7c628d57a23adad9e457875314fcdc47575df49afc2fd60f3e2f12f92707c23a namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.629963350Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7fe1ae4df5fc3       ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753          10 minutes ago      Running             headlamp                  0                   ef032c077a9bb       headlamp-58b88cff49-pkdpk
	4999c3b7505c2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf   17 minutes ago      Running             gcp-auth                  0                   0baad7da81cca       gcp-auth-d4c87556c-fbd9n
	59046ced6b465       ba04bb24b9575                                                                                                  17 minutes ago      Running             storage-provisioner       0                   422eeee32f3c7       storage-provisioner
	1cabeffdf46fd       97e04611ad434                                                                                                  18 minutes ago      Running             coredns                   0                   94eeb18b87283       coredns-5dd5756b68-khk2s
	1056d28082563       7da62c127fc0f                                                                                                  18 minutes ago      Running             kube-proxy                0                   af3e361d0b85e       kube-proxy-4m9nm
	92000aefe5383       9cdd6470f48c8                                                                                                  18 minutes ago      Running             etcd                      0                   95e83160dff6c       etcd-addons-585000
	a8e887332d59e       64fc40cee3716                                                                                                  18 minutes ago      Running             kube-scheduler            0                   61eb9114b1a5a       kube-scheduler-addons-585000
	0cc0200950b6e       30bb499447fe1                                                                                                  18 minutes ago      Running             kube-apiserver            0                   d88b38dfea206       kube-apiserver-addons-585000
	c400229c491d5       89d57b83c1786                                                                                                  18 minutes ago      Running             kube-controller-manager   0                   5520678c190ce       kube-controller-manager-addons-585000
	
	* 
	* ==> coredns [1cabeffdf46f] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51045 - 1814 "HINFO IN 7601180359592532728.73972322534061862. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.004583475s
	[INFO] 10.244.0.13:48220 - 27508 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000122275s
	[INFO] 10.244.0.13:58104 - 55098 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000039883s
	[INFO] 10.244.0.13:58849 - 26333 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000042675s
	[INFO] 10.244.0.13:53313 - 23208 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000023463s
	[INFO] 10.244.0.13:34872 - 37616 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023379s
	[INFO] 10.244.0.13:36929 - 11673 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023963s
	[INFO] 10.244.0.13:46593 - 5724 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001098011s
	[INFO] 10.244.0.13:48407 - 27925 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001132685s
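
The NXDOMAIN sequence above is the pod resolver walking its resolv.conf search list: storage.googleapis.com has fewer dots than the cluster default ndots:5, so it is first tried under gcp-auth.svc.cluster.local, svc.cluster.local, and cluster.local before the bare name resolves with NOERROR. A small sketch reproducing that expansion order (the search list and ndots value are read from the pod's /etc/resolv.conf in reality):

    package main

    import (
        "fmt"
        "strings"
    )

    // expand mimics the resolver's search-list handling: names with fewer than
    // ndots dots are tried with each search suffix first, then as-is.
    func expand(name string, search []string, ndots int) []string {
        var tries []string
        if strings.Count(name, ".") < ndots {
            for _, s := range search {
                tries = append(tries, name+"."+s)
            }
        }
        return append(tries, name)
    }

    func main() {
        fmt.Println(expand("storage.googleapis.com",
            []string{"gcp-auth.svc.cluster.local", "svc.cluster.local", "cluster.local"},
            5))
    }
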
	
	* 
	* ==> describe nodes <==
	* Name:               addons-585000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-585000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d9526fa6c1a1bb1b20f95e15606f1308e308d84a
	                    minikube.k8s.io/name=addons-585000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_03T17_04_11_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-585000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 00:04:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-585000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 00:22:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 00:17:27 +0000   Wed, 04 Oct 2023 00:04:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 00:17:27 +0000   Wed, 04 Oct 2023 00:04:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 00:17:27 +0000   Wed, 04 Oct 2023 00:04:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 00:17:27 +0000   Wed, 04 Oct 2023 00:04:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-585000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 0684753a9d5543b6bf7bf60f67ba1317
	  System UUID:                0684753a9d5543b6bf7bf60f67ba1317
	  Boot ID:                    b5c3b3eb-78df-44c0-a5f8-68774932e45d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-d4c87556c-fbd9n                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  headlamp                    headlamp-58b88cff49-pkdpk                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5dd5756b68-khk2s                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 etcd-addons-585000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-585000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-585000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-4m9nm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-585000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node addons-585000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node addons-585000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node addons-585000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node addons-585000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node addons-585000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node addons-585000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                kubelet          Node addons-585000 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node addons-585000 event: Registered Node addons-585000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.495022] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043117] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000788] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +6.182267] systemd-fstab-generator[489]: Ignoring "noauto" for root device
	[  +0.079218] systemd-fstab-generator[500]: Ignoring "noauto" for root device
	[  +0.426961] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.177441] systemd-fstab-generator[742]: Ignoring "noauto" for root device
	[  +0.084046] systemd-fstab-generator[753]: Ignoring "noauto" for root device
	[  +0.086428] systemd-fstab-generator[766]: Ignoring "noauto" for root device
	[  +1.232741] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +0.076723] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[  +0.068940] systemd-fstab-generator[946]: Ignoring "noauto" for root device
	[  +0.067661] systemd-fstab-generator[957]: Ignoring "noauto" for root device
	[  +0.086560] systemd-fstab-generator[999]: Ignoring "noauto" for root device
	[Oct 4 00:04] systemd-fstab-generator[1108]: Ignoring "noauto" for root device
	[  +1.419313] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.528330] systemd-fstab-generator[1477]: Ignoring "noauto" for root device
	[  +4.623780] systemd-fstab-generator[2363]: Ignoring "noauto" for root device
	[ +14.190759] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.496510] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +0.762652] kauditd_printk_skb: 67 callbacks suppressed
	[ +18.036762] kauditd_printk_skb: 10 callbacks suppressed
	[Oct 4 00:10] kauditd_printk_skb: 4 callbacks suppressed
	[Oct 4 00:11] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 4 00:12] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [92000aefe538] <==
	* {"level":"info","ts":"2023-10-04T00:04:07.332412Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2023-10-04T00:04:07.779564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-04T00:04:07.779655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-04T00:04:07.779697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-10-04T00:04:07.779722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-10-04T00:04:07.779739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-10-04T00:04:07.779758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-10-04T00:04:07.779794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-10-04T00:04:07.780671Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-585000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T00:04:07.780756Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T00:04:07.781263Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-10-04T00:04:07.781336Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:04:07.781426Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T00:04:07.781791Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:04:07.781849Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:04:07.781874Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:04:07.782126Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T00:04:07.782156Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-04T00:04:07.782572Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T00:14:07.794073Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1195}
	{"level":"info","ts":"2023-10-04T00:14:07.809078Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1195,"took":"14.718788ms","hash":1561751362}
	{"level":"info","ts":"2023-10-04T00:14:07.809096Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1561751362,"revision":1195,"compact-revision":-1}
	{"level":"info","ts":"2023-10-04T00:19:07.797282Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1852}
	{"level":"info","ts":"2023-10-04T00:19:07.80972Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1852,"took":"12.234463ms","hash":1586820359}
	{"level":"info","ts":"2023-10-04T00:19:07.809734Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1586820359,"revision":1852,"compact-revision":1195}
	
	* 
	* ==> gcp-auth [4999c3b7505c] <==
	* 2023/10/04 00:05:05 GCP Auth Webhook started!
	2023/10/04 00:10:24 Ready to marshal response ...
	2023/10/04 00:10:24 Ready to write response ...
	2023/10/04 00:10:24 Ready to marshal response ...
	2023/10/04 00:10:24 Ready to write response ...
	2023/10/04 00:10:34 Ready to marshal response ...
	2023/10/04 00:10:34 Ready to write response ...
	2023/10/04 00:11:22 Ready to marshal response ...
	2023/10/04 00:11:22 Ready to write response ...
	2023/10/04 00:11:22 Ready to marshal response ...
	2023/10/04 00:11:22 Ready to write response ...
	2023/10/04 00:11:22 Ready to marshal response ...
	2023/10/04 00:11:22 Ready to write response ...
	2023/10/04 00:11:37 Ready to marshal response ...
	2023/10/04 00:11:37 Ready to write response ...
	2023/10/04 00:11:58 Ready to marshal response ...
	2023/10/04 00:11:58 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  00:22:25 up 18 min,  0 users,  load average: 0.34, 0.23, 0.19
	Linux addons-585000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0cc0200950b6] <==
	* E1004 00:10:50.314788       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1004 00:11:08.313792       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1004 00:11:22.215599       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.244.116"}
	I1004 00:11:47.633142       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1004 00:12:08.316300       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.160595       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.160622       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.167272       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.167291       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.174430       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.174447       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.177283       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.177295       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.179264       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.179278       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.184100       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.184112       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.189220       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.189231       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.192620       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.192633       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1004 00:12:15.177382       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1004 00:12:15.185019       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1004 00:12:15.201033       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1004 00:12:35.289747       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	* 
	* ==> kube-controller-manager [c400229c491d] <==
	* E1004 00:19:15.038894       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:19:17.102732       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:19:17.102747       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:19:36.954598       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:19:36.954615       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:20:05.388788       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:20:05.388811       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:20:10.923425       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:20:10.923443       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:20:13.934126       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:20:13.934249       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:20:56.955323       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:20:56.955340       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:20:57.535982       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:20:57.536056       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:21:09.237002       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:21:09.237024       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:21:41.888297       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:21:41.888319       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:21:46.877545       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:21:46.877569       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:21:53.441539       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:21:53.441621       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:22:24.444869       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:22:24.444891       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [1056d2808256] <==
	* I1004 00:04:24.994967       1 server_others.go:69] "Using iptables proxy"
	I1004 00:04:25.008728       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I1004 00:04:25.054070       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 00:04:25.054092       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 00:04:25.054953       1 server_others.go:152] "Using iptables Proxier"
	I1004 00:04:25.054989       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 00:04:25.055146       1 server.go:846] "Version info" version="v1.28.2"
	I1004 00:04:25.055152       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 00:04:25.056159       1 config.go:188] "Starting service config controller"
	I1004 00:04:25.056167       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 00:04:25.056185       1 config.go:97] "Starting endpoint slice config controller"
	I1004 00:04:25.056188       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 00:04:25.056455       1 config.go:315] "Starting node config controller"
	I1004 00:04:25.056459       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 00:04:25.157056       1 shared_informer.go:318] Caches are synced for node config
	I1004 00:04:25.157076       1 shared_informer.go:318] Caches are synced for service config
	I1004 00:04:25.157088       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a8e887332d59] <==
	* W1004 00:04:08.386414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 00:04:08.386433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 00:04:08.386500       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 00:04:08.386521       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1004 00:04:08.386548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 00:04:08.386567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1004 00:04:08.386603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 00:04:08.386625       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1004 00:04:08.386669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 00:04:08.386696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1004 00:04:08.386739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 00:04:08.386765       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1004 00:04:08.386786       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 00:04:08.386793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 00:04:08.386810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 00:04:08.386876       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1004 00:04:09.204848       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 00:04:09.204864       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 00:04:09.297780       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 00:04:09.297794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1004 00:04:09.358667       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 00:04:09.358688       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1004 00:04:09.389038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 00:04:09.389085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1004 00:04:09.875156       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 00:03:51 UTC, ends at Wed 2023-10-04 00:22:25 UTC. --
	Oct 04 00:17:10 addons-585000 kubelet[2369]: E1004 00:17:10.891861    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:17:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:17:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:17:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:18:10 addons-585000 kubelet[2369]: E1004 00:18:10.892149    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:18:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:18:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:18:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:19:10 addons-585000 kubelet[2369]: E1004 00:19:10.892588    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:19:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:19:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:19:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:19:10 addons-585000 kubelet[2369]: W1004 00:19:10.906241    2369 machine.go:65] Cannot read vendor id correctly, set empty.
	Oct 04 00:20:10 addons-585000 kubelet[2369]: E1004 00:20:10.891904    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:20:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:20:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:20:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:21:10 addons-585000 kubelet[2369]: E1004 00:21:10.891982    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:21:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:21:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:21:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:22:10 addons-585000 kubelet[2369]: E1004 00:22:10.891863    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:22:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:22:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:22:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [59046ced6b46] <==
	* I1004 00:04:26.966816       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 00:04:26.974630       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 00:04:26.974651       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 00:04:26.979059       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 00:04:26.979221       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-585000_78b745e2-b305-4877-9367-ded1aea23542!
	I1004 00:04:26.979729       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7fdf1027-c160-41cc-988a-74718f8f9c77", APIVersion:"v1", ResourceVersion:"562", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-585000_78b745e2-b305-4877-9367-ded1aea23542 became leader
	I1004 00:04:27.081357       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-585000_78b745e2-b305-4877-9367-ded1aea23542!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-585000 -n addons-585000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-585000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (720.86s)

TestAddons/parallel/Ingress (0.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-585000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Non-zero exit: kubectl --context addons-585000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (38.432333ms)

** stderr ** 
	error: no matching resources found

** /stderr **
addons_test.go:186: failed waiting for ingress-nginx-controller : exit status 1
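The wait exits non-zero because of "no matching resources found": no pod matching app.kubernetes.io/component=controller ever appeared in the ingress-nginx namespace, so the addon's controller was never created. A minimal manual check against the same context and selector (a sketch only; it assumes the addons-585000 cluster is still reachable, which is rarely true once the post-mortem has run):

	kubectl --context addons-585000 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide
	kubectl --context addons-585000 -n ingress-nginx get events --sort-by=.lastTimestamp

If the first command returns nothing, the controller deployment itself is missing, and the events listing would typically show why (image pull failures, scheduling errors, or the addon never being applied).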
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-585000 -n addons-585000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT |                     |
	|         | -p download-only-278000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT |                     |
	|         | -p download-only-278000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:03 PDT |
	| delete  | -p download-only-278000                                                                     | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:03 PDT |
	| delete  | -p download-only-278000                                                                     | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:03 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-585000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT |                     |
	|         | binary-mirror-585000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49316                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-585000                                                                     | binary-mirror-585000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:03 PDT |
	| start   | -p addons-585000 --wait=true                                                                | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:10 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-585000 ssh cat                                                                       | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:10 PDT | 03 Oct 23 17:10 PDT |
	|         | /opt/local-path-provisioner/pvc-320167fa-02d3-46e8-a116-8a91ec031e73_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-585000 addons disable                                                                | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:10 PDT | 03 Oct 23 17:11 PDT |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:11 PDT | 03 Oct 23 17:11 PDT |
	|         | addons-585000                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:11 PDT | 03 Oct 23 17:11 PDT |
	|         | -p addons-585000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-585000 addons                                                                        | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:12 PDT | 03 Oct 23 17:12 PDT |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-585000 addons                                                                        | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:12 PDT | 03 Oct 23 17:12 PDT |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-585000 addons                                                                        | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:12 PDT | 03 Oct 23 17:12 PDT |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/03 17:03:39
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:03:39.158581    1527 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:03:39.158729    1527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:03:39.158732    1527 out.go:309] Setting ErrFile to fd 2...
	I1003 17:03:39.158735    1527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:03:39.158883    1527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:03:39.159993    1527 out.go:303] Setting JSON to false
	I1003 17:03:39.176087    1527 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":193,"bootTime":1696377626,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:03:39.176163    1527 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:03:39.181737    1527 out.go:177] * [addons-585000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:03:39.192759    1527 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:03:39.188852    1527 notify.go:220] Checking for updates...
	I1003 17:03:39.199748    1527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:03:39.202788    1527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:03:39.205793    1527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:03:39.208724    1527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:03:39.211745    1527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:03:39.214972    1527 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:03:39.217703    1527 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:03:39.224760    1527 start.go:298] selected driver: qemu2
	I1003 17:03:39.224769    1527 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:03:39.224776    1527 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:03:39.227233    1527 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:03:39.228551    1527 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:03:39.231862    1527 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:03:39.231889    1527 cni.go:84] Creating CNI manager for ""
	I1003 17:03:39.231898    1527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:03:39.231909    1527 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:03:39.231915    1527 start_flags.go:321] config:
	{Name:addons-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-585000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:03:39.236467    1527 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:03:39.244723    1527 out.go:177] * Starting control plane node addons-585000 in cluster addons-585000
	I1003 17:03:39.248720    1527 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:03:39.248732    1527 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:03:39.248743    1527 cache.go:57] Caching tarball of preloaded images
	I1003 17:03:39.248794    1527 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:03:39.248799    1527 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:03:39.249009    1527 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/config.json ...
	I1003 17:03:39.249019    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/config.json: {Name:mkd778f466258ed6668af8388431c37d54563e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:03:39.249218    1527 start.go:365] acquiring machines lock for addons-585000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:03:39.249347    1527 start.go:369] acquired machines lock for "addons-585000" in 123.542µs
	I1003 17:03:39.249358    1527 start.go:93] Provisioning new machine with config: &{Name:addons-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-585000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:03:39.249387    1527 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:03:39.257735    1527 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1003 17:03:40.011044    1527 start.go:159] libmachine.API.Create for "addons-585000" (driver="qemu2")
	I1003 17:03:40.011102    1527 client.go:168] LocalClient.Create starting
	I1003 17:03:40.011343    1527 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:03:40.143344    1527 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:03:40.289873    1527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:03:40.457885    1527 main.go:141] libmachine: Creating SSH key...
	I1003 17:03:40.594285    1527 main.go:141] libmachine: Creating Disk image...
	I1003 17:03:40.594296    1527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:03:40.594516    1527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2
	I1003 17:03:40.676774    1527 main.go:141] libmachine: STDOUT: 
	I1003 17:03:40.676808    1527 main.go:141] libmachine: STDERR: 
	I1003 17:03:40.676887    1527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2 +20000M
	I1003 17:03:40.686768    1527 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:03:40.686791    1527 main.go:141] libmachine: STDERR: 
	I1003 17:03:40.686809    1527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2
	I1003 17:03:40.686818    1527 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:03:40.686868    1527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:68:9c:60:58:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2
	I1003 17:03:40.738577    1527 main.go:141] libmachine: STDOUT: 
	I1003 17:03:40.738615    1527 main.go:141] libmachine: STDERR: 
	I1003 17:03:40.738620    1527 main.go:141] libmachine: Attempt 0
	I1003 17:03:40.738639    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:42.739795    1527 main.go:141] libmachine: Attempt 1
	I1003 17:03:42.739877    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:44.740254    1527 main.go:141] libmachine: Attempt 2
	I1003 17:03:44.740355    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:46.741375    1527 main.go:141] libmachine: Attempt 3
	I1003 17:03:46.741387    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:48.742425    1527 main.go:141] libmachine: Attempt 4
	I1003 17:03:48.742472    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:50.743507    1527 main.go:141] libmachine: Attempt 5
	I1003 17:03:50.743527    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:52.744581    1527 main.go:141] libmachine: Attempt 6
	I1003 17:03:52.744624    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:52.744780    1527 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1003 17:03:52.744831    1527 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651dfd67}
	I1003 17:03:52.744840    1527 main.go:141] libmachine: Found match: 56:68:9c:60:58:22
	I1003 17:03:52.744854    1527 main.go:141] libmachine: IP: 192.168.105.2
	I1003 17:03:52.744861    1527 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I1003 17:03:53.749773    1527 machine.go:88] provisioning docker machine ...
	I1003 17:03:53.749795    1527 buildroot.go:166] provisioning hostname "addons-585000"
	I1003 17:03:53.750715    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:53.750982    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:53.750988    1527 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-585000 && echo "addons-585000" | sudo tee /etc/hostname
	I1003 17:03:53.772634    1527 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1003 17:03:56.876175    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-585000
	
	I1003 17:03:56.876314    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:56.876804    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:56.876820    1527 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-585000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-585000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-585000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 17:03:56.953972    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 17:03:56.953997    1527 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17345-986/.minikube CaCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17345-986/.minikube}
	I1003 17:03:56.954022    1527 buildroot.go:174] setting up certificates
	I1003 17:03:56.954030    1527 provision.go:83] configureAuth start
	I1003 17:03:56.954037    1527 provision.go:138] copyHostCerts
	I1003 17:03:56.954176    1527 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/ca.pem (1082 bytes)
	I1003 17:03:56.954543    1527 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/cert.pem (1123 bytes)
	I1003 17:03:56.954713    1527 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/key.pem (1679 bytes)
	I1003 17:03:56.954859    1527 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem org=jenkins.addons-585000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-585000]
	I1003 17:03:57.033117    1527 provision.go:172] copyRemoteCerts
	I1003 17:03:57.033177    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 17:03:57.033191    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:03:57.066294    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 17:03:57.072924    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1003 17:03:57.080102    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 17:03:57.087350    1527 provision.go:86] duration metric: configureAuth took 133.318167ms
	I1003 17:03:57.087358    1527 buildroot.go:189] setting minikube options for container-runtime
	I1003 17:03:57.087453    1527 config.go:182] Loaded profile config "addons-585000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:03:57.087487    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:57.087704    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:57.087709    1527 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 17:03:57.150120    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 17:03:57.150127    1527 buildroot.go:70] root file system type: tmpfs
	I1003 17:03:57.150191    1527 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 17:03:57.150229    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:57.150472    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:57.150508    1527 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 17:03:57.217705    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 17:03:57.217760    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:57.218004    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:57.218015    1527 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 17:03:57.578682    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
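	Note: the block above is minikube's idempotent unit install. The rendered unit is written to docker.service.new, `diff -u` compares it with the live unit (exiting non-zero when they differ or, as here, when the target does not exist yet), and only in that case is the new file moved into place and docker reloaded, enabled, and restarted. A minimal sketch of the same idiom (paths illustrative):
	  sudo tee /lib/systemd/system/docker.service.new >/dev/null <<'EOF'
	  [Unit]
	  ...
	  EOF
	  sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	    || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service \
	         && sudo systemctl daemon-reload && sudo systemctl restart docker; }
	The double ExecStart= in the unit is deliberate: the empty assignment clears any ExecStart inherited from a base configuration, as the comments inside the unit itself explain.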
	
	I1003 17:03:57.578700    1527 machine.go:91] provisioned docker machine in 3.829016209s
	I1003 17:03:57.578705    1527 client.go:171] LocalClient.Create took 17.568053208s
	I1003 17:03:57.578718    1527 start.go:167] duration metric: libmachine.API.Create for "addons-585000" took 17.568154958s
	I1003 17:03:57.578727    1527 start.go:300] post-start starting for "addons-585000" (driver="qemu2")
	I1003 17:03:57.578734    1527 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 17:03:57.578804    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 17:03:57.578814    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:03:57.611980    1527 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 17:03:57.613271    1527 info.go:137] Remote host: Buildroot 2021.02.12
	I1003 17:03:57.613284    1527 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17345-986/.minikube/addons for local assets ...
	I1003 17:03:57.613351    1527 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17345-986/.minikube/files for local assets ...
	I1003 17:03:57.613375    1527 start.go:303] post-start completed in 34.643875ms
	I1003 17:03:57.613878    1527 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/config.json ...
	I1003 17:03:57.614052    1527 start.go:128] duration metric: createHost completed in 18.365139125s
	I1003 17:03:57.614072    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:57.614285    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:57.614290    1527 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 17:03:57.674033    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696377837.538666711
	
	I1003 17:03:57.674041    1527 fix.go:206] guest clock: 1696377837.538666711
	I1003 17:03:57.674045    1527 fix.go:219] Guest: 2023-10-03 17:03:57.538666711 -0700 PDT Remote: 2023-10-03 17:03:57.614055 -0700 PDT m=+18.473932959 (delta=-75.388289ms)
	I1003 17:03:57.674058    1527 fix.go:190] guest clock delta is within tolerance: -75.388289ms
	I1003 17:03:57.674061    1527 start.go:83] releasing machines lock for "addons-585000", held for 18.42518925s
	I1003 17:03:57.674383    1527 ssh_runner.go:195] Run: cat /version.json
	I1003 17:03:57.674394    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:03:57.674401    1527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 17:03:57.674438    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:03:57.752962    1527 ssh_runner.go:195] Run: systemctl --version
	I1003 17:03:57.755435    1527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 17:03:57.757647    1527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 17:03:57.757682    1527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 17:03:57.763566    1527 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 17:03:57.763573    1527 start.go:469] detecting cgroup driver to use...
	I1003 17:03:57.763688    1527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 17:03:57.769448    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1003 17:03:57.772711    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 17:03:57.776091    1527 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 17:03:57.776116    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 17:03:57.779455    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 17:03:57.782467    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 17:03:57.785325    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 17:03:57.788823    1527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 17:03:57.792297    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
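	The sed batch above rewrites /etc/containerd/config.toml in place so containerd agrees with the rest of the stack: the sandbox image is pinned to registry.k8s.io/pause:3.9, the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are mapped to io.containerd.runc.v2, and the cgroup driver is forced to cgroupfs. For example, the SystemdCgroup edit turns a line such as
	  SystemdCgroup = true
	into
	  SystemdCgroup = false
	which matches the `cgroupDriver: cgroupfs` setting in the KubeletConfiguration generated further down.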
	I1003 17:03:57.795615    1527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 17:03:57.798463    1527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 17:03:57.801289    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:03:57.870368    1527 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 17:03:57.876293    1527 start.go:469] detecting cgroup driver to use...
	I1003 17:03:57.876338    1527 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 17:03:57.883607    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 17:03:57.888285    1527 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 17:03:57.899852    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 17:03:57.904354    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 17:03:57.909268    1527 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 17:03:57.948483    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 17:03:57.953584    1527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 17:03:57.958727    1527 ssh_runner.go:195] Run: which cri-dockerd
	I1003 17:03:57.959920    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 17:03:57.962534    1527 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1003 17:03:57.967042    1527 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 17:03:58.045502    1527 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 17:03:58.130551    1527 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 17:03:58.130610    1527 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 17:03:58.135707    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:03:58.217318    1527 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 17:03:59.375606    1527 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.158300667s)
	I1003 17:03:59.375671    1527 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 17:03:59.450200    1527 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 17:03:59.524947    1527 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 17:03:59.593104    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:03:59.661107    1527 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 17:03:59.668567    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:03:59.749258    1527 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1003 17:03:59.773399    1527 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 17:03:59.773477    1527 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 17:03:59.775487    1527 start.go:537] Will wait 60s for crictl version
	I1003 17:03:59.775515    1527 ssh_runner.go:195] Run: which crictl
	I1003 17:03:59.776779    1527 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 17:03:59.798500    1527 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1003 17:03:59.798566    1527 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 17:03:59.808398    1527 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 17:03:59.824846    1527 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1003 17:03:59.824981    1527 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1003 17:03:59.826582    1527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
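	This is the usual safe /etc/hosts edit: grep -v strips any stale host.minikube.internal entry, the echo appends the fresh mapping, and `cp` (rather than mv) writes the result back so the inode and permissions of /etc/hosts are preserved. A quick check of the result (hypothetical, not part of this log):
	  grep host.minikube.internal /etc/hosts
	  # expected: 192.168.105.1	host.minikube.internal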
	I1003 17:03:59.830378    1527 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:03:59.830418    1527 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 17:03:59.836052    1527 docker.go:664] Got preloaded images: 
	I1003 17:03:59.836060    1527 docker.go:670] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I1003 17:03:59.836097    1527 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 17:03:59.839267    1527 ssh_runner.go:195] Run: which lz4
	I1003 17:03:59.840501    1527 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 17:03:59.841889    1527 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 17:03:59.841901    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I1003 17:04:01.160665    1527 docker.go:628] Took 1.320191 seconds to copy over tarball
	I1003 17:04:01.160727    1527 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 17:04:02.196965    1527 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.036251292s)
	I1003 17:04:02.196980    1527 ssh_runner.go:146] rm: /preloaded.tar.lz4
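	The preload exchange above boils down to three steps (sketch, local cache path abbreviated):
	  scp .../preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 guest:/preloaded.tar.lz4
	  sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4    # unpacks docker's overlay2 image store under /var
	  sudo rm /preloaded.tar.lz4
	which is why the `docker images` listing after the docker restart below shows the complete v1.28.2 control-plane image set without a single registry pull.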
	I1003 17:04:02.212861    1527 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 17:04:02.216574    1527 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1003 17:04:02.221618    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:04:02.297119    1527 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 17:04:03.756211    1527 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.459112709s)
	I1003 17:04:03.756322    1527 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 17:04:03.768480    1527 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 17:04:03.768491    1527 cache_images.go:84] Images are preloaded, skipping loading
	I1003 17:04:03.768560    1527 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 17:04:03.778188    1527 cni.go:84] Creating CNI manager for ""
	I1003 17:04:03.778197    1527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:04:03.778215    1527 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1003 17:04:03.778224    1527 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-585000 NodeName:addons-585000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 17:04:03.778291    1527 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-585000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
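	This rendered kubeadm config is what later lands in /var/tmp/minikube/kubeadm.yaml. If you want to validate such a file by hand before an init, kubeadm can do a no-op pass over it (assumption: run inside the guest with the matching binary version):
	  sudo /var/lib/minikube/binaries/v1.28.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run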
	
	I1003 17:04:03.778325    1527 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-585000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-585000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1003 17:04:03.778383    1527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1003 17:04:03.781266    1527 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 17:04:03.781296    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 17:04:03.784487    1527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1003 17:04:03.789671    1527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 17:04:03.794606    1527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1003 17:04:03.799383    1527 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I1003 17:04:03.800642    1527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 17:04:03.804557    1527 certs.go:56] Setting up /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000 for IP: 192.168.105.2
	I1003 17:04:03.804566    1527 certs.go:190] acquiring lock for shared ca certs: {Name:mk60f926c1ccb065a30406b60af36acc708e601e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.804722    1527 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key
	I1003 17:04:03.876701    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt ...
	I1003 17:04:03.876706    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt: {Name:mk0cc174d1dbd071293e805ad6149c7ec4b142e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.876904    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key ...
	I1003 17:04:03.876908    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key: {Name:mk5b0f090e1e87c9db61f19ee029eeb4bf325f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.877012    1527 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key
	I1003 17:04:03.972780    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.crt ...
	I1003 17:04:03.972784    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.crt: {Name:mk86baa625f8f131b96564e73e4ff47f159af5b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.972918    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key ...
	I1003 17:04:03.972921    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key: {Name:mk9131c9bbe858f22b10b784ddbb510d37a1be7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.973043    1527 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.key
	I1003 17:04:03.973049    1527 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt with IP's: []
	I1003 17:04:04.093588    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt ...
	I1003 17:04:04.093595    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: {Name:mka78906c9a5365a7e95b92135f4b70302d9ca1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.093800    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.key ...
	I1003 17:04:04.093804    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.key: {Name:mk6f93cb157b068e90dc54f48279212367ea5933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.093915    1527 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key.96055969
	I1003 17:04:04.093926    1527 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1003 17:04:04.305627    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt.96055969 ...
	I1003 17:04:04.305631    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt.96055969: {Name:mk46c7ebd409ecd36224a01eb936cac8f04632ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.305820    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key.96055969 ...
	I1003 17:04:04.305826    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key.96055969: {Name:mk401a87dc7fd50281b18296e80430af94b0e1a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.305953    1527 certs.go:337] copying /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt
	I1003 17:04:04.306054    1527 certs.go:341] copying /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key
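	Note the IP SANs on the apiserver cert above: 192.168.105.2 (node IP), 127.0.0.1, 10.0.0.1, and 10.96.0.1, the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster `kubernetes` service ClusterIP, so TLS verification also works from inside pods. To inspect the SANs on an issued cert (hypothetical check):
	  openssl x509 -noout -text -in apiserver.crt | grep -A1 'Subject Alternative Name'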
	I1003 17:04:04.306143    1527 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.key
	I1003 17:04:04.306160    1527 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.crt with IP's: []
	I1003 17:04:04.424122    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.crt ...
	I1003 17:04:04.424130    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.crt: {Name:mkdf8b0eea5ab20c208335ea1ea4eff82b50060d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.424327    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.key ...
	I1003 17:04:04.424330    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.key: {Name:mkfa742ebb9620853aeccd91e597f5c286ba74ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.424537    1527 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 17:04:04.424561    1527 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem (1082 bytes)
	I1003 17:04:04.424578    1527 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem (1123 bytes)
	I1003 17:04:04.424595    1527 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem (1679 bytes)
	I1003 17:04:04.424928    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1003 17:04:04.432625    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 17:04:04.440097    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 17:04:04.447625    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 17:04:04.454640    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 17:04:04.461322    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 17:04:04.468544    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 17:04:04.475781    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 17:04:04.482585    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 17:04:04.489167    1527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 17:04:04.495146    1527 ssh_runner.go:195] Run: openssl version
	I1003 17:04:04.497003    1527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 17:04:04.500457    1527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:04:04.502228    1527 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:04:04.502254    1527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:04:04.504052    1527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
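	The b5213941.0 symlink name is not arbitrary: OpenSSL locates trusted CAs in /etc/ssl/certs via files named <subject-hash>.0, and the hash is exactly what the preceding command computed:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # b5213941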
	I1003 17:04:04.507353    1527 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1003 17:04:04.508658    1527 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1003 17:04:04.508694    1527 kubeadm.go:404] StartCluster: {Name:addons-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-585000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:04:04.508756    1527 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 17:04:04.518400    1527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 17:04:04.521199    1527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 17:04:04.524450    1527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 17:04:04.527574    1527 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 17:04:04.527588    1527 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 17:04:04.548358    1527 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1003 17:04:04.548382    1527 kubeadm.go:322] [preflight] Running pre-flight checks
	I1003 17:04:04.607082    1527 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 17:04:04.607142    1527 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 17:04:04.607188    1527 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1003 17:04:04.714297    1527 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 17:04:04.722481    1527 out.go:204]   - Generating certificates and keys ...
	I1003 17:04:04.722514    1527 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1003 17:04:04.722542    1527 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1003 17:04:04.759296    1527 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 17:04:04.808856    1527 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1003 17:04:04.964910    1527 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1003 17:04:05.087030    1527 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1003 17:04:05.149596    1527 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1003 17:04:05.149646    1527 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-585000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I1003 17:04:05.222012    1527 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1003 17:04:05.222068    1527 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-585000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I1003 17:04:05.286742    1527 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 17:04:05.330145    1527 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 17:04:05.629678    1527 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1003 17:04:05.629709    1527 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 17:04:05.731900    1527 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 17:04:05.854397    1527 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 17:04:05.983877    1527 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 17:04:06.151732    1527 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 17:04:06.152571    1527 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 17:04:06.153646    1527 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 17:04:06.157945    1527 out.go:204]   - Booting up control plane ...
	I1003 17:04:06.158024    1527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 17:04:06.158075    1527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 17:04:06.158108    1527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 17:04:06.161462    1527 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 17:04:06.161792    1527 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 17:04:06.161837    1527 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1003 17:04:06.249226    1527 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1003 17:04:09.750516    1527 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.501291 seconds
	I1003 17:04:09.750582    1527 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 17:04:09.756399    1527 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 17:04:10.266794    1527 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 17:04:10.266918    1527 kubeadm.go:322] [mark-control-plane] Marking the node addons-585000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 17:04:10.770745    1527 kubeadm.go:322] [bootstrap-token] Using token: uzkazy.ii0fjdqhazr4xlxp
	I1003 17:04:10.779414    1527 out.go:204]   - Configuring RBAC rules ...
	I1003 17:04:10.779490    1527 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 17:04:10.779537    1527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 17:04:10.781300    1527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 17:04:10.782670    1527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 17:04:10.783591    1527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 17:04:10.784726    1527 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 17:04:10.788735    1527 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 17:04:10.960644    1527 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1003 17:04:11.179744    1527 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1003 17:04:11.180034    1527 kubeadm.go:322] 
	I1003 17:04:11.180066    1527 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1003 17:04:11.180069    1527 kubeadm.go:322] 
	I1003 17:04:11.180099    1527 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1003 17:04:11.180102    1527 kubeadm.go:322] 
	I1003 17:04:11.180113    1527 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1003 17:04:11.180141    1527 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 17:04:11.180162    1527 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 17:04:11.180170    1527 kubeadm.go:322] 
	I1003 17:04:11.180201    1527 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1003 17:04:11.180204    1527 kubeadm.go:322] 
	I1003 17:04:11.180235    1527 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 17:04:11.180239    1527 kubeadm.go:322] 
	I1003 17:04:11.180270    1527 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1003 17:04:11.180305    1527 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 17:04:11.180339    1527 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 17:04:11.180343    1527 kubeadm.go:322] 
	I1003 17:04:11.180390    1527 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 17:04:11.180428    1527 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1003 17:04:11.180432    1527 kubeadm.go:322] 
	I1003 17:04:11.180481    1527 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uzkazy.ii0fjdqhazr4xlxp \
	I1003 17:04:11.180530    1527 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9490a7a2ce6f448c9720863afea1604779c0ef0543f588db367e732362807037 \
	I1003 17:04:11.180544    1527 kubeadm.go:322] 	--control-plane 
	I1003 17:04:11.180546    1527 kubeadm.go:322] 
	I1003 17:04:11.180583    1527 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1003 17:04:11.180585    1527 kubeadm.go:322] 
	I1003 17:04:11.180630    1527 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uzkazy.ii0fjdqhazr4xlxp \
	I1003 17:04:11.180681    1527 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9490a7a2ce6f448c9720863afea1604779c0ef0543f588db367e732362807037 
	I1003 17:04:11.180859    1527 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 17:04:11.180869    1527 cni.go:84] Creating CNI manager for ""
	I1003 17:04:11.180879    1527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:04:11.187987    1527 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1003 17:04:11.191059    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1003 17:04:11.194034    1527 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
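	The 457-byte conflist written here is the default bridge CNI config matching the 10.244.0.0/16 pod CIDR chosen above. Illustrative of the shape only (not the exact file minikube writes):
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }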
	I1003 17:04:11.198752    1527 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 17:04:11.198795    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:11.198820    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d9526fa6c1a1bb1b20f95e15606f1308e308d84a minikube.k8s.io/name=addons-585000 minikube.k8s.io/updated_at=2023_10_03T17_04_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:11.257808    1527 ops.go:34] apiserver oom_adj: -16
	I1003 17:04:11.257825    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:11.294996    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:11.828345    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:12.328361    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:12.828330    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:13.328397    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:13.828294    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:14.328299    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:14.828282    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:15.328311    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:15.828265    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:16.328253    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:16.828279    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:17.328231    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:17.828247    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:18.328230    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:18.828192    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:19.328127    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:19.828168    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:20.328197    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:20.827992    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:21.327973    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:21.828182    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:22.328181    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:22.828070    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:23.328028    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:23.828030    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:24.327999    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:24.363217    1527 kubeadm.go:1081] duration metric: took 13.164797125s to wait for elevateKubeSystemPrivileges.
	I1003 17:04:24.363234    1527 kubeadm.go:406] StartCluster complete in 19.855065708s
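	The burst of identical `kubectl get sa default` calls above (17:04:11 through 17:04:24, roughly every 500ms) is a readiness poll: the `default` ServiceAccount only exists once the controller-manager's serviceaccount controller has reconciled the namespace, so minikube retries until the get succeeds; that is the 13.16s wait reported by elevateKubeSystemPrivileges. A minimal equivalent loop (sketch):
	  until sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done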
	I1003 17:04:24.363243    1527 settings.go:142] acquiring lock: {Name:mkad5f21e92defa14247d9a0adf05a6e4272cec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:24.363390    1527 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:04:24.363569    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/kubeconfig: {Name:mke3e06a6a2057954076f4772b87ca4469721c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:24.363810    1527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 17:04:24.363870    1527 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1003 17:04:24.363932    1527 addons.go:69] Setting volumesnapshots=true in profile "addons-585000"
	I1003 17:04:24.363942    1527 addons.go:231] Setting addon volumesnapshots=true in "addons-585000"
	I1003 17:04:24.363950    1527 config.go:182] Loaded profile config "addons-585000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:04:24.363971    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363950    1527 addons.go:69] Setting ingress-dns=true in profile "addons-585000"
	I1003 17:04:24.364021    1527 addons.go:231] Setting addon ingress-dns=true in "addons-585000"
	I1003 17:04:24.364047    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363981    1527 addons.go:69] Setting registry=true in profile "addons-585000"
	I1003 17:04:24.364069    1527 addons.go:231] Setting addon registry=true in "addons-585000"
	I1003 17:04:24.364086    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363984    1527 addons.go:69] Setting inspektor-gadget=true in profile "addons-585000"
	I1003 17:04:24.364126    1527 addons.go:231] Setting addon inspektor-gadget=true in "addons-585000"
	I1003 17:04:24.364190    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363986    1527 addons.go:69] Setting metrics-server=true in profile "addons-585000"
	I1003 17:04:24.364214    1527 addons.go:231] Setting addon metrics-server=true in "addons-585000"
	I1003 17:04:24.364247    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363988    1527 addons.go:69] Setting gcp-auth=true in profile "addons-585000"
	I1003 17:04:24.364271    1527 mustload.go:65] Loading cluster: addons-585000
	W1003 17:04:24.364287    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	W1003 17:04:24.364300    1527 addons.go:277] "addons-585000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W1003 17:04:24.364313    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	W1003 17:04:24.364319    1527 addons.go:277] "addons-585000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I1003 17:04:24.364322    1527 addons.go:467] Verifying addon registry=true in "addons-585000"
	I1003 17:04:24.370457    1527 out.go:177] * Verifying registry addon...
	I1003 17:04:24.364356    1527 config.go:182] Loaded profile config "addons-585000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:04:24.363992    1527 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-585000"
	I1003 17:04:24.370476    1527 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-585000"
	I1003 17:04:24.363993    1527 addons.go:69] Setting ingress=true in profile "addons-585000"
	I1003 17:04:24.363995    1527 addons.go:69] Setting cloud-spanner=true in profile "addons-585000"
	I1003 17:04:24.370530    1527 addons.go:231] Setting addon cloud-spanner=true in "addons-585000"
	I1003 17:04:24.363994    1527 addons.go:69] Setting default-storageclass=true in profile "addons-585000"
	I1003 17:04:24.370529    1527 addons.go:231] Setting addon ingress=true in "addons-585000"
	I1003 17:04:24.370591    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.370597    1527 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-585000"
	I1003 17:04:24.363996    1527 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-585000"
	W1003 17:04:24.364679    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	I1003 17:04:24.370563    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363989    1527 addons.go:69] Setting storage-provisioner=true in profile "addons-585000"
	I1003 17:04:24.370642    1527 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-585000"
	W1003 17:04:24.370646    1527 addons.go:277] "addons-585000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W1003 17:04:24.370793    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	W1003 17:04:24.370892    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	I1003 17:04:24.371573    1527 addons.go:231] Setting addon default-storageclass=true in "addons-585000"
	I1003 17:04:24.371900    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.380387    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1003 17:04:24.383435    1527 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1003 17:04:24.383459    1527 addons.go:231] Setting addon storage-provisioner=true in "addons-585000"
	W1003 17:04:24.383452    1527 addons.go:277] "addons-585000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W1003 17:04:24.383476    1527 addons_storage_classes.go:57] "addons-585000" is not running, writing storage-provisioner-rancher=true to disk and skipping enablement
	I1003 17:04:24.383496    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.383533    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.383999    1527 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1003 17:04:24.386486    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1003 17:04:24.389459    1527 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1003 17:04:24.389462    1527 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-585000"
	I1003 17:04:24.389488    1527 addons.go:467] Verifying addon ingress=true in "addons-585000"
	I1003 17:04:24.389521    1527 host.go:66] Checking if "addons-585000" exists ...
	W1003 17:04:24.389784    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	I1003 17:04:24.395446    1527 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1003 17:04:24.395461    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.395467    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1003 17:04:24.395470    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	W1003 17:04:24.395486    1527 addons.go:277] "addons-585000" is not running, setting default-storageclass=true and skipping enablement (err=<nil>)
	I1003 17:04:24.398475    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1003 17:04:24.398484    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.398497    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.407159    1527 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I1003 17:04:24.408397    1527 out.go:177] * Verifying ingress addon...
	I1003 17:04:24.414483    1527 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 17:04:24.418517    1527 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1003 17:04:24.421445    1527 out.go:177]   - Using image docker.io/busybox:stable
	I1003 17:04:24.423490    1527 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-585000" context rescaled to 1 replicas
	I1003 17:04:24.424496    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1003 17:04:24.427548    1527 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:04:24.427858    1527 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1003 17:04:24.430982    1527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
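
Note: the bash pipeline above edits CoreDNS in place. It dumps the coredns ConfigMap, uses sed to splice a hosts stanza in front of the "forward . /etc/resolv.conf" line (and a log directive in front of "errors"), then pushes the result back through kubectl replace. Reconstructed from the sed expressions themselves, the spliced Corefile fragment is:

        hosts {
           192.168.105.1 host.minikube.internal
           fallthrough
        }

Inside the cluster this resolves host.minikube.internal to the host side of the VM network (192.168.105.1), with fallthrough handing every other name to the existing resolver chain; the "host record injected into CoreDNS's ConfigMap" line at 17:04:25 below confirms it took effect.
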
	I1003 17:04:24.433444    1527 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 17:04:24.439439    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1003 17:04:24.439555    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.442455    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 17:04:24.442466    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.447401    1527 out.go:177] * Verifying Kubernetes components...
	I1003 17:04:24.451674    1527 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1003 17:04:24.453485    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 17:04:24.457496    1527 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1003 17:04:24.467442    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1003 17:04:24.476494    1527 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1003 17:04:24.480410    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1003 17:04:24.483451    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1003 17:04:24.483462    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.485254    1527 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1003 17:04:24.486383    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1003 17:04:24.489434    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1003 17:04:24.492499    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1003 17:04:24.499438    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1003 17:04:24.508438    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1003 17:04:24.512295    1527 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1003 17:04:24.512323    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1003 17:04:24.512325    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1003 17:04:24.512338    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1003 17:04:24.512347    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
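
Note: the repeated "scp memory --> /etc/kubernetes/addons/..." lines transfer each manifest straight from memory into the guest over the SSH clients opened by sshutil.go, with no temporary files on the host. A minimal sketch of pushing an in-memory payload over golang.org/x/crypto/ssh; the helper name and the tee-based transport are assumptions, not minikube's implementation.

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // copyMemory pushes an in-memory payload to a remote path over a single
    // SSH session. Name and tee-based transport are illustrative only.
    func copyMemory(client *ssh.Client, payload []byte, remotePath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(payload)
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
    }

    func main() {
        key, err := os.ReadFile("/path/to/machines/<name>/id_rsa") // per-machine key, as in the log
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "192.168.105.2:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local VM sketch
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        if err := copyMemory(client, []byte("kind: ConfigMap\n"), "/tmp/demo.yaml"); err != nil {
            panic(err)
        }
    }
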
	I1003 17:04:24.558122    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 17:04:24.567387    1527 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1003 17:04:24.567399    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1003 17:04:24.572776    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1003 17:04:24.599074    1527 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1003 17:04:24.599085    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1003 17:04:24.675063    1527 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1003 17:04:24.675075    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1003 17:04:24.676328    1527 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1003 17:04:24.676333    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1003 17:04:24.739366    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1003 17:04:24.739377    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1003 17:04:24.749052    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1003 17:04:24.770490    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1003 17:04:24.770501    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1003 17:04:24.785371    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1003 17:04:24.881017    1527 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 17:04:24.881027    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1003 17:04:24.903016    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1003 17:04:24.903028    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1003 17:04:24.969674    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1003 17:04:24.969686    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1003 17:04:25.049120    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 17:04:25.150729    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1003 17:04:25.150740    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1003 17:04:25.240313    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1003 17:04:25.240326    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1003 17:04:25.371181    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1003 17:04:25.371193    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1003 17:04:25.404463    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1003 17:04:25.404477    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1003 17:04:25.434986    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1003 17:04:25.434995    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1003 17:04:25.450155    1527 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.007696167s)
	I1003 17:04:25.450172    1527 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I1003 17:04:25.450599    1527 node_ready.go:35] waiting up to 6m0s for node "addons-585000" to be "Ready" ...
	I1003 17:04:25.452207    1527 node_ready.go:49] node "addons-585000" has status "Ready":"True"
	I1003 17:04:25.452226    1527 node_ready.go:38] duration metric: took 1.601791ms waiting for node "addons-585000" to be "Ready" ...
	I1003 17:04:25.452231    1527 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1003 17:04:25.455004    1527 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace to be "Ready" ...
	I1003 17:04:25.474770    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1003 17:04:25.474780    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1003 17:04:25.542278    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1003 17:04:25.542289    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1003 17:04:25.568058    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1003 17:04:25.900055    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.327297041s)
	I1003 17:04:25.900073    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.151039167s)
	I1003 17:04:25.900620    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.342523s)
	I1003 17:04:26.031761    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246402458s)
	I1003 17:04:26.031779    1527 addons.go:467] Verifying addon metrics-server=true in "addons-585000"
	W1003 17:04:26.031811    1527 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1003 17:04:26.031889    1527 retry.go:31] will retry after 325.291799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
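
Note: this failure is the usual CRD registration race. A single kubectl apply creates both the VolumeSnapshot CRDs and a VolumeSnapshotClass object, and the custom resource is rejected because the API server has not finished registering the new kind, hence "ensure CRDs are installed first". The retry.go:31 line shows the remedy: wait briefly and re-apply. A minimal sketch of that retry-after-delay pattern; the helper and its doubling backoff are illustrative, and only the ~325ms first wait comes from the log.

    package main

    import (
        "fmt"
        "time"
    )

    // retry runs fn until it succeeds or attempts run out, sleeping between
    // tries so the API server can finish registering freshly applied CRDs.
    func retry(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2 // illustrative backoff
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(3, 325*time.Millisecond, func() error {
            calls++
            if calls == 1 { // first apply loses the CRD race, as in the log
                return fmt.Errorf("no matches for kind %q in version %q",
                    "VolumeSnapshotClass", "snapshot.storage.k8s.io/v1")
            }
            return nil
        })
        fmt.Printf("succeeded after %d attempts (err=%v)\n", calls, err)
    }
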
	I1003 17:04:26.358314    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 17:04:27.197455    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.629412791s)
	I1003 17:04:27.197475    1527 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-585000"
	I1003 17:04:27.203259    1527 out.go:177] * Verifying csi-hostpath-driver addon...
	I1003 17:04:27.212640    1527 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1003 17:04:27.217488    1527 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1003 17:04:27.217496    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:27.220905    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:27.463742    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:27.724717    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:28.224875    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:28.724679    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:28.978938    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.620667959s)
	I1003 17:04:29.224602    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:29.725503    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:29.965298    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:30.225992    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:30.727405    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:30.990463    1527 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1003 17:04:30.990480    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:31.027829    1527 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1003 17:04:31.032721    1527 addons.go:231] Setting addon gcp-auth=true in "addons-585000"
	I1003 17:04:31.032748    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:31.033437    1527 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1003 17:04:31.033445    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:31.067911    1527 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1003 17:04:31.071726    1527 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1003 17:04:31.074751    1527 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1003 17:04:31.074757    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1003 17:04:31.080083    1527 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1003 17:04:31.080090    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1003 17:04:31.084923    1527 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1003 17:04:31.084929    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1003 17:04:31.089910    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1003 17:04:31.227750    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:31.348093    1527 addons.go:467] Verifying addon gcp-auth=true in "addons-585000"
	I1003 17:04:31.351542    1527 out.go:177] * Verifying gcp-auth addon...
	I1003 17:04:31.357561    1527 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1003 17:04:31.360910    1527 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1003 17:04:31.360918    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:31.363860    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:31.729484    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:31.868680    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:32.231013    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:32.368635    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:32.469496    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:32.731026    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:32.869106    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:33.231972    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:33.370303    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:33.732757    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:33.870865    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:34.233653    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:34.371764    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:34.734158    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:34.872420    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:34.973595    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:35.235439    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:35.373126    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:35.736125    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:35.873867    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:36.236732    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:36.377065    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:36.738038    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:36.875114    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:37.237689    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:37.375767    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:37.476370    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:37.738419    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:37.876233    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:38.238661    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:38.376555    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:38.739112    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:38.877451    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:39.239515    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:39.378140    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:39.479966    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:39.740793    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:39.878589    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:40.240762    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:40.379208    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:40.741780    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:40.879681    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:41.241996    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:41.380297    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:41.742785    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:41.880492    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:41.981384    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:42.243296    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:42.381074    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:42.743302    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:42.883349    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:43.244024    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:43.381895    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:43.744586    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:43.882405    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:44.244990    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:44.382844    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:44.483439    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:44.745332    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:44.884458    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:45.245668    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:45.383767    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:45.746153    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:45.884389    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:46.246346    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:46.384557    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:46.486038    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:46.746583    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:46.885125    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:47.247216    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:47.385576    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:47.747761    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:47.885573    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:48.247887    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:48.385997    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:48.748143    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:48.886329    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:48.987206    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:49.248369    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:49.386625    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:49.749134    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:49.886585    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:50.249103    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:50.386927    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:50.749495    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:50.887744    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:50.988394    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:51.249792    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:51.387776    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:51.749720    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:51.887719    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:52.249981    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:52.389544    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:52.751121    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:52.888557    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:53.250511    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:53.388659    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:53.490543    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:53.750650    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:53.888553    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:54.250803    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:54.389104    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:54.751294    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:54.889472    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:55.253309    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:55.389552    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:55.751715    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:55.889618    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:55.990684    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:56.251672    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:56.389851    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:56.752031    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:56.889910    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:57.252259    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:57.390082    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:57.752574    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:57.890363    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:57.991811    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:58.252678    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:58.390362    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:58.752590    1527 kapi.go:107] duration metric: took 31.511793709s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
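
Note: every kapi.go:96 line above is one turn of a poll loop: list the pods behind a label selector, check their phase, sleep, repeat until all are Running or the context deadline expires. The registry and ingress waits at the end of this log die with exactly that "context deadline exceeded". A minimal client-go sketch of such a wait; the 500ms interval and the kubeconfig path are assumptions.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods polls until every pod matching selector in ns is Running,
    // or until ctx expires with context.DeadlineExceeded.
    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                running := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        running = false // "current state: Pending", in the log's terms
                        break
                    }
                }
                if running {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond): // assumed interval
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // adjust locally
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        err = waitForPods(ctx, kubernetes.NewForConfigOrDie(cfg), "kube-system",
            "kubernetes.io/minikube-addons=csi-hostpath-driver")
        fmt.Println("wait finished:", err)
    }
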
	I1003 17:04:58.890958    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:59.391228    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:59.891535    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:00.391427    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:00.491918    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:05:00.891593    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:01.391491    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:01.891778    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:02.391830    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:02.492542    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:05:02.891986    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:03.393570    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:03.892644    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:04.392440    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:04.892576    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:04.992969    1527 pod_ready.go:92] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:04.992976    1527 pod_ready.go:81] duration metric: took 39.508226208s waiting for pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.992981    1527 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r24k8" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.993883    1527 pod_ready.go:97] error getting pod "coredns-5dd5756b68-r24k8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-r24k8" not found
	I1003 17:05:04.993891    1527 pod_ready.go:81] duration metric: took 907.208µs waiting for pod "coredns-5dd5756b68-r24k8" in "kube-system" namespace to be "Ready" ...
	E1003 17:05:04.993895    1527 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-r24k8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-r24k8" not found
	I1003 17:05:04.993899    1527 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.996401    1527 pod_ready.go:92] pod "etcd-addons-585000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:04.996406    1527 pod_ready.go:81] duration metric: took 2.500458ms waiting for pod "etcd-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.996410    1527 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.999071    1527 pod_ready.go:92] pod "kube-apiserver-addons-585000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:04.999079    1527 pod_ready.go:81] duration metric: took 2.666208ms waiting for pod "kube-apiserver-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.999082    1527 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.001717    1527 pod_ready.go:92] pod "kube-controller-manager-addons-585000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:05.001725    1527 pod_ready.go:81] duration metric: took 2.637584ms waiting for pod "kube-controller-manager-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.001728    1527 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4m9nm" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.193135    1527 pod_ready.go:92] pod "kube-proxy-4m9nm" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:05.193144    1527 pod_ready.go:81] duration metric: took 191.372792ms waiting for pod "kube-proxy-4m9nm" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.193148    1527 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.392551    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:05.593390    1527 pod_ready.go:92] pod "kube-scheduler-addons-585000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:05.593399    1527 pod_ready.go:81] duration metric: took 400.165917ms waiting for pod "kube-scheduler-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.593404    1527 pod_ready.go:38] duration metric: took 40.11130675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1003 17:05:05.593416    1527 api_server.go:52] waiting for apiserver process to appear ...
	I1003 17:05:05.593483    1527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 17:05:05.598747    1527 api_server.go:72] duration metric: took 41.126444291s to wait for apiserver process to appear ...
	I1003 17:05:05.598757    1527 api_server.go:88] waiting for apiserver healthz status ...
	I1003 17:05:05.598764    1527 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I1003 17:05:05.601839    1527 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I1003 17:05:05.602592    1527 api_server.go:141] control plane version: v1.28.2
	I1003 17:05:05.602598    1527 api_server.go:131] duration metric: took 3.8385ms to wait for apiserver health ...
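
Note: the health check above is a plain HTTPS GET against the apiserver, considered healthy once /healthz answers "200 ok". A minimal probe in the same spirit; the InsecureSkipVerify transport is an assumption for brevity, and a real client should trust the cluster CA and present credentials instead.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption for brevity: skip certificate verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.105.2:8443/healthz") // endpoint from the log
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // healthy apiserver: 200 ok
    }
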
	I1003 17:05:05.602602    1527 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 17:05:05.796281    1527 system_pods.go:59] 13 kube-system pods found
	I1003 17:05:05.796294    1527 system_pods.go:61] "coredns-5dd5756b68-khk2s" [c45559e9-de80-4305-942c-094315d94d47] Running
	I1003 17:05:05.796297    1527 system_pods.go:61] "csi-hostpath-attacher-0" [63177292-e73a-431c-a767-00cf3ce9bce0] Running
	I1003 17:05:05.796299    1527 system_pods.go:61] "csi-hostpath-resizer-0" [8f737860-0999-4db9-9f65-190d64a4cfb4] Running
	I1003 17:05:05.796301    1527 system_pods.go:61] "csi-hostpathplugin-8thxw" [4548034b-9d0c-4d9d-9f9f-839610935d97] Running
	I1003 17:05:05.796303    1527 system_pods.go:61] "etcd-addons-585000" [f1858d65-44e4-469b-8372-a4e1e0a14d48] Running
	I1003 17:05:05.796306    1527 system_pods.go:61] "kube-apiserver-addons-585000" [377e1779-3201-42b4-945b-e2195b3f0a9a] Running
	I1003 17:05:05.796308    1527 system_pods.go:61] "kube-controller-manager-addons-585000" [f9156947-57be-49f5-9a1c-aadaa5bddf0c] Running
	I1003 17:05:05.796310    1527 system_pods.go:61] "kube-proxy-4m9nm" [847e005d-c7e1-4128-9ac6-2fef6730e3e4] Running
	I1003 17:05:05.796312    1527 system_pods.go:61] "kube-scheduler-addons-585000" [5b3e2487-5781-451e-b566-814b86a4eb80] Running
	I1003 17:05:05.796315    1527 system_pods.go:61] "metrics-server-7c66d45ddc-bvd9d" [e70964ce-39c1-444c-bfcc-e672f6027857] Running
	I1003 17:05:05.796317    1527 system_pods.go:61] "snapshot-controller-58dbcc7b99-2gg5z" [3f9fa5f9-6948-4b90-9114-e5bdb6822c87] Running
	I1003 17:05:05.796319    1527 system_pods.go:61] "snapshot-controller-58dbcc7b99-t2wrg" [174abadb-36b9-4fab-b1e2-44fa5de2def5] Running
	I1003 17:05:05.796321    1527 system_pods.go:61] "storage-provisioner" [41c3bd79-1e45-45d3-a1d8-9f5a2c5c7da5] Running
	I1003 17:05:05.796324    1527 system_pods.go:74] duration metric: took 193.680291ms to wait for pod list to return data ...
	I1003 17:05:05.796329    1527 default_sa.go:34] waiting for default service account to be created ...
	I1003 17:05:05.892491    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:05.992623    1527 default_sa.go:45] found service account: "default"
	I1003 17:05:05.992632    1527 default_sa.go:55] duration metric: took 196.262458ms for default service account to be created ...
	I1003 17:05:05.992638    1527 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 17:05:06.196272    1527 system_pods.go:86] 13 kube-system pods found
	I1003 17:05:06.196281    1527 system_pods.go:89] "coredns-5dd5756b68-khk2s" [c45559e9-de80-4305-942c-094315d94d47] Running
	I1003 17:05:06.196284    1527 system_pods.go:89] "csi-hostpath-attacher-0" [63177292-e73a-431c-a767-00cf3ce9bce0] Running
	I1003 17:05:06.196286    1527 system_pods.go:89] "csi-hostpath-resizer-0" [8f737860-0999-4db9-9f65-190d64a4cfb4] Running
	I1003 17:05:06.196288    1527 system_pods.go:89] "csi-hostpathplugin-8thxw" [4548034b-9d0c-4d9d-9f9f-839610935d97] Running
	I1003 17:05:06.196290    1527 system_pods.go:89] "etcd-addons-585000" [f1858d65-44e4-469b-8372-a4e1e0a14d48] Running
	I1003 17:05:06.196292    1527 system_pods.go:89] "kube-apiserver-addons-585000" [377e1779-3201-42b4-945b-e2195b3f0a9a] Running
	I1003 17:05:06.196294    1527 system_pods.go:89] "kube-controller-manager-addons-585000" [f9156947-57be-49f5-9a1c-aadaa5bddf0c] Running
	I1003 17:05:06.196296    1527 system_pods.go:89] "kube-proxy-4m9nm" [847e005d-c7e1-4128-9ac6-2fef6730e3e4] Running
	I1003 17:05:06.196298    1527 system_pods.go:89] "kube-scheduler-addons-585000" [5b3e2487-5781-451e-b566-814b86a4eb80] Running
	I1003 17:05:06.196300    1527 system_pods.go:89] "metrics-server-7c66d45ddc-bvd9d" [e70964ce-39c1-444c-bfcc-e672f6027857] Running
	I1003 17:05:06.196302    1527 system_pods.go:89] "snapshot-controller-58dbcc7b99-2gg5z" [3f9fa5f9-6948-4b90-9114-e5bdb6822c87] Running
	I1003 17:05:06.196304    1527 system_pods.go:89] "snapshot-controller-58dbcc7b99-t2wrg" [174abadb-36b9-4fab-b1e2-44fa5de2def5] Running
	I1003 17:05:06.196306    1527 system_pods.go:89] "storage-provisioner" [41c3bd79-1e45-45d3-a1d8-9f5a2c5c7da5] Running
	I1003 17:05:06.196310    1527 system_pods.go:126] duration metric: took 203.63025ms to wait for k8s-apps to be running ...
	I1003 17:05:06.196313    1527 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 17:05:06.196369    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 17:05:06.201921    1527 system_svc.go:56] duration metric: took 5.604333ms WaitForService to wait for kubelet.
	I1003 17:05:06.201929    1527 kubeadm.go:581] duration metric: took 41.729511541s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1003 17:05:06.201940    1527 node_conditions.go:102] verifying NodePressure condition ...
	I1003 17:05:06.392698    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:06.393051    1527 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1003 17:05:06.393087    1527 node_conditions.go:123] node cpu capacity is 2
	I1003 17:05:06.393093    1527 node_conditions.go:105] duration metric: took 191.114375ms to run NodePressure ...
	I1003 17:05:06.393098    1527 start.go:228] waiting for startup goroutines ...
	I1003 17:05:06.892943    1527 kapi.go:107] duration metric: took 35.5093015s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1003 17:05:06.897190    1527 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-585000 cluster.
	I1003 17:05:06.900088    1527 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1003 17:05:06.903114    1527 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
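
Note: per the gcp-auth messages above, opting a pod out of credential injection is a metadata label; the webhook keys off the gcp-auth-skip-secret key named in the log. A hypothetical pod spec (name and image are illustrative) showing where the label goes:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - name: app
        image: docker.io/busybox:stable
        command: ["sleep", "3600"]
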
	I1003 17:10:24.417571    1527 kapi.go:107] duration metric: took 6m0.007521708s to wait for kubernetes.io/minikube-addons=registry ...
	W1003 17:10:24.417687    1527 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1003 17:10:24.469744    1527 kapi.go:107] duration metric: took 6m0.015851042s to wait for app.kubernetes.io/name=ingress-nginx ...
	W1003 17:10:24.469781    1527 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I1003 17:10:24.476811    1527 out.go:177] * Enabled addons: ingress-dns, inspektor-gadget, default-storageclass, cloud-spanner, storage-provisioner-rancher, storage-provisioner, metrics-server, volumesnapshots, csi-hostpath-driver, gcp-auth
	I1003 17:10:24.483852    1527 addons.go:502] enable addons completed in 6m0.093974291s: enabled=[ingress-dns inspektor-gadget default-storageclass cloud-spanner storage-provisioner-rancher storage-provisioner metrics-server volumesnapshots csi-hostpath-driver gcp-auth]
	I1003 17:10:24.483865    1527 start.go:233] waiting for cluster config update ...
	I1003 17:10:24.483873    1527 start.go:242] writing updated cluster config ...
	I1003 17:10:24.484152    1527 ssh_runner.go:195] Run: rm -f paused
	I1003 17:10:24.580795    1527 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I1003 17:10:24.588925    1527 out.go:177] * Done! kubectl is now configured to use "addons-585000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-10-04 00:03:51 UTC, ends at Wed 2023-10-04 00:20:20 UTC. --
	Oct 04 00:12:08 addons-585000 dockerd[1122]: time="2023-10-04T00:12:08.349007007Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1115]: time="2023-10-04T00:12:14.240701963Z" level=info msg="ignoring event" container=5ff0bef51f286679f0f3e673886c6c3125c354cab0674eabec453da6975a6991 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.241579343Z" level=info msg="shim disconnected" id=5ff0bef51f286679f0f3e673886c6c3125c354cab0674eabec453da6975a6991 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.241698010Z" level=warning msg="cleaning up after shim disconnected" id=5ff0bef51f286679f0f3e673886c6c3125c354cab0674eabec453da6975a6991 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.241717594Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1115]: time="2023-10-04T00:12:14.242239805Z" level=info msg="ignoring event" container=517d59bdac0d4789df6d0fe970fb5d29190888751abb5196cf7cdf1d701cec03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.242324306Z" level=info msg="shim disconnected" id=517d59bdac0d4789df6d0fe970fb5d29190888751abb5196cf7cdf1d701cec03 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.242883392Z" level=warning msg="cleaning up after shim disconnected" id=517d59bdac0d4789df6d0fe970fb5d29190888751abb5196cf7cdf1d701cec03 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.242933309Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.323347774Z" level=info msg="shim disconnected" id=4f4a827bdb5aaf43cabfbf9cb0a34b674a64751dd5011a061778c8476e531e8e namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.323376233Z" level=warning msg="cleaning up after shim disconnected" id=4f4a827bdb5aaf43cabfbf9cb0a34b674a64751dd5011a061778c8476e531e8e namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.323380274Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1115]: time="2023-10-04T00:12:14.323530942Z" level=info msg="ignoring event" container=4f4a827bdb5aaf43cabfbf9cb0a34b674a64751dd5011a061778c8476e531e8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:14 addons-585000 dockerd[1115]: time="2023-10-04T00:12:14.329811145Z" level=info msg="ignoring event" container=c09f627f07f68d859d6bfdefb05a6e13d8c3efbab921fb36772c99b884785536 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.329912979Z" level=info msg="shim disconnected" id=c09f627f07f68d859d6bfdefb05a6e13d8c3efbab921fb36772c99b884785536 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.329940521Z" level=warning msg="cleaning up after shim disconnected" id=c09f627f07f68d859d6bfdefb05a6e13d8c3efbab921fb36772c99b884785536 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.329944687Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1115]: time="2023-10-04T00:12:20.574199852Z" level=info msg="ignoring event" container=ecef984296b3c318eabdfadfcfd794578d7d5b79b06b6d15401d400fa2c41351 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.574490520Z" level=info msg="shim disconnected" id=ecef984296b3c318eabdfadfcfd794578d7d5b79b06b6d15401d400fa2c41351 namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.574591896Z" level=warning msg="cleaning up after shim disconnected" id=ecef984296b3c318eabdfadfcfd794578d7d5b79b06b6d15401d400fa2c41351 namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.574612355Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1115]: time="2023-10-04T00:12:20.629572097Z" level=info msg="ignoring event" container=7c628d57a23adad9e457875314fcdc47575df49afc2fd60f3e2f12f92707c23a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.629930724Z" level=info msg="shim disconnected" id=7c628d57a23adad9e457875314fcdc47575df49afc2fd60f3e2f12f92707c23a namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.629958933Z" level=warning msg="cleaning up after shim disconnected" id=7c628d57a23adad9e457875314fcdc47575df49afc2fd60f3e2f12f92707c23a namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.629963350Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7fe1ae4df5fc3       ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753          8 minutes ago       Running             headlamp                  0                   ef032c077a9bb       headlamp-58b88cff49-pkdpk
	4999c3b7505c2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf   15 minutes ago      Running             gcp-auth                  0                   0baad7da81cca       gcp-auth-d4c87556c-fbd9n
	59046ced6b465       ba04bb24b9575                                                                                                  15 minutes ago      Running             storage-provisioner       0                   422eeee32f3c7       storage-provisioner
	1cabeffdf46fd       97e04611ad434                                                                                                  15 minutes ago      Running             coredns                   0                   94eeb18b87283       coredns-5dd5756b68-khk2s
	1056d28082563       7da62c127fc0f                                                                                                  15 minutes ago      Running             kube-proxy                0                   af3e361d0b85e       kube-proxy-4m9nm
	92000aefe5383       9cdd6470f48c8                                                                                                  16 minutes ago      Running             etcd                      0                   95e83160dff6c       etcd-addons-585000
	a8e887332d59e       64fc40cee3716                                                                                                  16 minutes ago      Running             kube-scheduler            0                   61eb9114b1a5a       kube-scheduler-addons-585000
	0cc0200950b6e       30bb499447fe1                                                                                                  16 minutes ago      Running             kube-apiserver            0                   d88b38dfea206       kube-apiserver-addons-585000
	c400229c491d5       89d57b83c1786                                                                                                  16 minutes ago      Running             kube-controller-manager   0                   5520678c190ce       kube-controller-manager-addons-585000
	
	* 
	* ==> coredns [1cabeffdf46f] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51045 - 1814 "HINFO IN 7601180359592532728.73972322534061862. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.004583475s
	[INFO] 10.244.0.13:48220 - 27508 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000122275s
	[INFO] 10.244.0.13:58104 - 55098 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000039883s
	[INFO] 10.244.0.13:58849 - 26333 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000042675s
	[INFO] 10.244.0.13:53313 - 23208 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000023463s
	[INFO] 10.244.0.13:34872 - 37616 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023379s
	[INFO] 10.244.0.13:36929 - 11673 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023963s
	[INFO] 10.244.0.13:46593 - 5724 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001098011s
	[INFO] 10.244.0.13:48407 - 27925 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001132685s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-585000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-585000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d9526fa6c1a1bb1b20f95e15606f1308e308d84a
	                    minikube.k8s.io/name=addons-585000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_03T17_04_11_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-585000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 00:04:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-585000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 00:20:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 00:17:27 +0000   Wed, 04 Oct 2023 00:04:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 00:17:27 +0000   Wed, 04 Oct 2023 00:04:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 00:17:27 +0000   Wed, 04 Oct 2023 00:04:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 00:17:27 +0000   Wed, 04 Oct 2023 00:04:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-585000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 0684753a9d5543b6bf7bf60f67ba1317
	  System UUID:                0684753a9d5543b6bf7bf60f67ba1317
	  Boot ID:                    b5c3b3eb-78df-44c0-a5f8-68774932e45d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-d4c87556c-fbd9n                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  headlamp                    headlamp-58b88cff49-pkdpk                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 coredns-5dd5756b68-khk2s                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-585000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-585000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-585000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-4m9nm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-585000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node addons-585000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node addons-585000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node addons-585000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node addons-585000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node addons-585000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node addons-585000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                16m                kubelet          Node addons-585000 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-585000 event: Registered Node addons-585000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.495022] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043117] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000788] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +6.182267] systemd-fstab-generator[489]: Ignoring "noauto" for root device
	[  +0.079218] systemd-fstab-generator[500]: Ignoring "noauto" for root device
	[  +0.426961] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.177441] systemd-fstab-generator[742]: Ignoring "noauto" for root device
	[  +0.084046] systemd-fstab-generator[753]: Ignoring "noauto" for root device
	[  +0.086428] systemd-fstab-generator[766]: Ignoring "noauto" for root device
	[  +1.232741] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +0.076723] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[  +0.068940] systemd-fstab-generator[946]: Ignoring "noauto" for root device
	[  +0.067661] systemd-fstab-generator[957]: Ignoring "noauto" for root device
	[  +0.086560] systemd-fstab-generator[999]: Ignoring "noauto" for root device
	[Oct 4 00:04] systemd-fstab-generator[1108]: Ignoring "noauto" for root device
	[  +1.419313] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.528330] systemd-fstab-generator[1477]: Ignoring "noauto" for root device
	[  +4.623780] systemd-fstab-generator[2363]: Ignoring "noauto" for root device
	[ +14.190759] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.496510] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +0.762652] kauditd_printk_skb: 67 callbacks suppressed
	[ +18.036762] kauditd_printk_skb: 10 callbacks suppressed
	[Oct 4 00:10] kauditd_printk_skb: 4 callbacks suppressed
	[Oct 4 00:11] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 4 00:12] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [92000aefe538] <==
	* {"level":"info","ts":"2023-10-04T00:04:07.332412Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2023-10-04T00:04:07.779564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-04T00:04:07.779655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-04T00:04:07.779697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-10-04T00:04:07.779722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-10-04T00:04:07.779739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-10-04T00:04:07.779758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-10-04T00:04:07.779794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-10-04T00:04:07.780671Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-585000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T00:04:07.780756Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T00:04:07.781263Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-10-04T00:04:07.781336Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:04:07.781426Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T00:04:07.781791Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:04:07.781849Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:04:07.781874Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:04:07.782126Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T00:04:07.782156Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-04T00:04:07.782572Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T00:14:07.794073Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1195}
	{"level":"info","ts":"2023-10-04T00:14:07.809078Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1195,"took":"14.718788ms","hash":1561751362}
	{"level":"info","ts":"2023-10-04T00:14:07.809096Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1561751362,"revision":1195,"compact-revision":-1}
	{"level":"info","ts":"2023-10-04T00:19:07.797282Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1852}
	{"level":"info","ts":"2023-10-04T00:19:07.80972Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1852,"took":"12.234463ms","hash":1586820359}
	{"level":"info","ts":"2023-10-04T00:19:07.809734Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1586820359,"revision":1852,"compact-revision":1195}
	
	* 
	* ==> gcp-auth [4999c3b7505c] <==
	* 2023/10/04 00:05:05 GCP Auth Webhook started!
	2023/10/04 00:10:24 Ready to marshal response ...
	2023/10/04 00:10:24 Ready to write response ...
	2023/10/04 00:10:24 Ready to marshal response ...
	2023/10/04 00:10:24 Ready to write response ...
	2023/10/04 00:10:34 Ready to marshal response ...
	2023/10/04 00:10:34 Ready to write response ...
	2023/10/04 00:11:22 Ready to marshal response ...
	2023/10/04 00:11:22 Ready to write response ...
	2023/10/04 00:11:22 Ready to marshal response ...
	2023/10/04 00:11:22 Ready to write response ...
	2023/10/04 00:11:22 Ready to marshal response ...
	2023/10/04 00:11:22 Ready to write response ...
	2023/10/04 00:11:37 Ready to marshal response ...
	2023/10/04 00:11:37 Ready to write response ...
	2023/10/04 00:11:58 Ready to marshal response ...
	2023/10/04 00:11:58 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  00:20:20 up 16 min,  0 users,  load average: 0.15, 0.20, 0.18
	Linux addons-585000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0cc0200950b6] <==
	* E1004 00:10:50.314788       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1004 00:11:08.313792       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1004 00:11:22.215599       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.244.116"}
	I1004 00:11:47.633142       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1004 00:12:08.316300       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.160595       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.160622       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.167272       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.167291       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.174430       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.174447       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.177283       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.177295       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.179264       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.179278       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.184100       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.184112       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.189220       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.189231       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.192620       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.192633       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1004 00:12:15.177382       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1004 00:12:15.185019       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1004 00:12:15.201033       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1004 00:12:35.289747       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	* 
	* ==> kube-controller-manager [c400229c491d] <==
	* E1004 00:17:06.657567       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:17:30.734478       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:17:30.734496       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:17:43.686942       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:17:43.686957       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:17:47.974020       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:17:47.974036       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:18:20.227624       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:18:20.227649       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:18:22.080967       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:18:22.080986       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:18:42.876081       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:18:42.876101       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:19:15.038875       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:19:15.038894       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:19:17.102732       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:19:17.102747       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:19:36.954598       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:19:36.954615       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:20:05.388788       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:20:05.388811       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:20:10.923425       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:20:10.923443       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:20:13.934126       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:20:13.934249       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [1056d2808256] <==
	* I1004 00:04:24.994967       1 server_others.go:69] "Using iptables proxy"
	I1004 00:04:25.008728       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I1004 00:04:25.054070       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 00:04:25.054092       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 00:04:25.054953       1 server_others.go:152] "Using iptables Proxier"
	I1004 00:04:25.054989       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 00:04:25.055146       1 server.go:846] "Version info" version="v1.28.2"
	I1004 00:04:25.055152       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 00:04:25.056159       1 config.go:188] "Starting service config controller"
	I1004 00:04:25.056167       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 00:04:25.056185       1 config.go:97] "Starting endpoint slice config controller"
	I1004 00:04:25.056188       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 00:04:25.056455       1 config.go:315] "Starting node config controller"
	I1004 00:04:25.056459       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 00:04:25.157056       1 shared_informer.go:318] Caches are synced for node config
	I1004 00:04:25.157076       1 shared_informer.go:318] Caches are synced for service config
	I1004 00:04:25.157088       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a8e887332d59] <==
	* W1004 00:04:08.386414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 00:04:08.386433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 00:04:08.386500       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 00:04:08.386521       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1004 00:04:08.386548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 00:04:08.386567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1004 00:04:08.386603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 00:04:08.386625       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1004 00:04:08.386669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 00:04:08.386696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1004 00:04:08.386739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 00:04:08.386765       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1004 00:04:08.386786       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 00:04:08.386793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 00:04:08.386810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 00:04:08.386876       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1004 00:04:09.204848       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 00:04:09.204864       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 00:04:09.297780       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 00:04:09.297794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1004 00:04:09.358667       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 00:04:09.358688       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1004 00:04:09.389038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 00:04:09.389085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1004 00:04:09.875156       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 00:03:51 UTC, ends at Wed 2023-10-04 00:20:20 UTC. --
	Oct 04 00:15:10 addons-585000 kubelet[2369]: E1004 00:15:10.892515    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:15:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:15:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:15:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:16:10 addons-585000 kubelet[2369]: E1004 00:16:10.892289    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:16:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:16:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:16:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:17:10 addons-585000 kubelet[2369]: E1004 00:17:10.891861    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:17:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:17:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:17:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:18:10 addons-585000 kubelet[2369]: E1004 00:18:10.892149    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:18:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:18:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:18:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:19:10 addons-585000 kubelet[2369]: E1004 00:19:10.892588    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:19:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:19:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:19:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:19:10 addons-585000 kubelet[2369]: W1004 00:19:10.906241    2369 machine.go:65] Cannot read vendor id correctly, set empty.
	Oct 04 00:20:10 addons-585000 kubelet[2369]: E1004 00:20:10.891904    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:20:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:20:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:20:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [59046ced6b46] <==
	* I1004 00:04:26.966816       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 00:04:26.974630       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 00:04:26.974651       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 00:04:26.979059       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 00:04:26.979221       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-585000_78b745e2-b305-4877-9367-ded1aea23542!
	I1004 00:04:26.979729       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7fdf1027-c160-41cc-988a-74718f8f9c77", APIVersion:"v1", ResourceVersion:"562", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-585000_78b745e2-b305-4877-9367-ded1aea23542 became leader
	I1004 00:04:27.081357       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-585000_78b745e2-b305-4877-9367-ded1aea23542!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-585000 -n addons-585000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-585000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (0.81s)

TestAddons/parallel/InspektorGadget (480.86s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:329: TestAddons/parallel/InspektorGadget: WARNING: pod list for "gadget" "k8s-app=gadget" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:816: ***** TestAddons/parallel/InspektorGadget: pod "k8s-app=gadget" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:816: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-585000 -n addons-585000
addons_test.go:816: TestAddons/parallel/InspektorGadget: showing logs for failed pods as of 2023-10-03 17:20:19.569699 -0700 PDT m=+1018.775554459
addons_test.go:817: failed waiting for inspektor-gadget pod: k8s-app=gadget within 8m0s: context deadline exceeded
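The wait that failed here can be reproduced by hand; a rough equivalent of the harness's poll, assuming the same profile and the k8s-app=gadget selector from the log, is:

    # list the pods in the gadget namespace that the test was waiting on
    kubectl --context addons-585000 -n gadget get pods -l k8s-app=gadget
    # block for up to the same 8m0s deadline waiting for them to become Ready
    kubectl --context addons-585000 -n gadget wait --for=condition=Ready pod -l k8s-app=gadget --timeout=8m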
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-585000 -n addons-585000
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 logs -n 25
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT |                     |
	|         | -p download-only-278000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT |                     |
	|         | -p download-only-278000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:03 PDT |
	| delete  | -p download-only-278000                                                                     | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:03 PDT |
	| delete  | -p download-only-278000                                                                     | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:03 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-585000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT |                     |
	|         | binary-mirror-585000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49316                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-585000                                                                     | binary-mirror-585000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:03 PDT |
	| start   | -p addons-585000 --wait=true                                                                | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT | 03 Oct 23 17:10 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-585000 ssh cat                                                                       | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:10 PDT | 03 Oct 23 17:10 PDT |
	|         | /opt/local-path-provisioner/pvc-320167fa-02d3-46e8-a116-8a91ec031e73_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-585000 addons disable                                                                | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:10 PDT | 03 Oct 23 17:11 PDT |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:11 PDT | 03 Oct 23 17:11 PDT |
	|         | addons-585000                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:11 PDT | 03 Oct 23 17:11 PDT |
	|         | -p addons-585000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-585000 addons                                                                        | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:12 PDT | 03 Oct 23 17:12 PDT |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-585000 addons                                                                        | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:12 PDT | 03 Oct 23 17:12 PDT |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-585000 addons                                                                        | addons-585000        | jenkins | v1.31.2 | 03 Oct 23 17:12 PDT | 03 Oct 23 17:12 PDT |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/03 17:03:39
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:03:39.158581    1527 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:03:39.158729    1527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:03:39.158732    1527 out.go:309] Setting ErrFile to fd 2...
	I1003 17:03:39.158735    1527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:03:39.158883    1527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:03:39.159993    1527 out.go:303] Setting JSON to false
	I1003 17:03:39.176087    1527 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":193,"bootTime":1696377626,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:03:39.176163    1527 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:03:39.181737    1527 out.go:177] * [addons-585000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:03:39.192759    1527 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:03:39.188852    1527 notify.go:220] Checking for updates...
	I1003 17:03:39.199748    1527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:03:39.202788    1527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:03:39.205793    1527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:03:39.208724    1527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:03:39.211745    1527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:03:39.214972    1527 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:03:39.217703    1527 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:03:39.224760    1527 start.go:298] selected driver: qemu2
	I1003 17:03:39.224769    1527 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:03:39.224776    1527 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:03:39.227233    1527 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:03:39.228551    1527 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:03:39.231862    1527 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:03:39.231889    1527 cni.go:84] Creating CNI manager for ""
	I1003 17:03:39.231898    1527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:03:39.231909    1527 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:03:39.231915    1527 start_flags.go:321] config:
	{Name:addons-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-585000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:03:39.236467    1527 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:03:39.244723    1527 out.go:177] * Starting control plane node addons-585000 in cluster addons-585000
	I1003 17:03:39.248720    1527 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:03:39.248732    1527 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:03:39.248743    1527 cache.go:57] Caching tarball of preloaded images
	I1003 17:03:39.248794    1527 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:03:39.248799    1527 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:03:39.249009    1527 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/config.json ...
	I1003 17:03:39.249019    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/config.json: {Name:mkd778f466258ed6668af8388431c37d54563e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:03:39.249218    1527 start.go:365] acquiring machines lock for addons-585000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:03:39.249347    1527 start.go:369] acquired machines lock for "addons-585000" in 123.542µs
	I1003 17:03:39.249358    1527 start.go:93] Provisioning new machine with config: &{Name:addons-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-585000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:03:39.249387    1527 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:03:39.257735    1527 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1003 17:03:40.011044    1527 start.go:159] libmachine.API.Create for "addons-585000" (driver="qemu2")
	I1003 17:03:40.011102    1527 client.go:168] LocalClient.Create starting
	I1003 17:03:40.011343    1527 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:03:40.143344    1527 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:03:40.289873    1527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:03:40.457885    1527 main.go:141] libmachine: Creating SSH key...
	I1003 17:03:40.594285    1527 main.go:141] libmachine: Creating Disk image...
	I1003 17:03:40.594296    1527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:03:40.594516    1527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2
	I1003 17:03:40.676774    1527 main.go:141] libmachine: STDOUT: 
	I1003 17:03:40.676808    1527 main.go:141] libmachine: STDERR: 
	I1003 17:03:40.676887    1527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2 +20000M
	I1003 17:03:40.686768    1527 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:03:40.686791    1527 main.go:141] libmachine: STDERR: 
	I1003 17:03:40.686809    1527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2
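	The disk is produced in two qemu-img steps: the raw boot2docker seed is converted into a qcow2 image, which is then grown by the requested 20000 MB (qcow2 allocates lazily, so the file stays small until written). A minimal sketch of the same flow, with the long profile paths shortened for readability:
	    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # seed image -> qcow2
	    qemu-img resize disk.qcow2 +20000M                           # grow to the requested size
	    qemu-img info disk.qcow2                                     # optional: confirm the virtual size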
	I1003 17:03:40.686818    1527 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:03:40.686868    1527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:68:9c:60:58:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/disk.qcow2
	I1003 17:03:40.738577    1527 main.go:141] libmachine: STDOUT: 
	I1003 17:03:40.738615    1527 main.go:141] libmachine: STDERR: 
	I1003 17:03:40.738620    1527 main.go:141] libmachine: Attempt 0
	I1003 17:03:40.738639    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:42.739795    1527 main.go:141] libmachine: Attempt 1
	I1003 17:03:42.739877    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:44.740254    1527 main.go:141] libmachine: Attempt 2
	I1003 17:03:44.740355    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:46.741375    1527 main.go:141] libmachine: Attempt 3
	I1003 17:03:46.741387    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:48.742425    1527 main.go:141] libmachine: Attempt 4
	I1003 17:03:48.742472    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:50.743507    1527 main.go:141] libmachine: Attempt 5
	I1003 17:03:50.743527    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:52.744581    1527 main.go:141] libmachine: Attempt 6
	I1003 17:03:52.744624    1527 main.go:141] libmachine: Searching for 56:68:9c:60:58:22 in /var/db/dhcpd_leases ...
	I1003 17:03:52.744780    1527 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1003 17:03:52.744831    1527 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651dfd67}
	I1003 17:03:52.744840    1527 main.go:141] libmachine: Found match: 56:68:9c:60:58:22
	I1003 17:03:52.744854    1527 main.go:141] libmachine: IP: 192.168.105.2
	I1003 17:03:52.744861    1527 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
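	There is no guest agent at this point, so the driver discovers the VM's IP by polling the macOS DHCP lease database for the MAC it assigned to the NIC (the fd=3 netdev above is a socket handed to qemu by socket_vmnet_client, which is why the guest's lease shows up on the host side at all). A rough shell equivalent of the polling loop (illustrative only; minikube does this in Go):
	    mac=56:68:9c:60:58:22                              # MAC from the log above
	    until grep -q "$mac" /var/db/dhcpd_leases; do sleep 2; done
	    grep -C3 "$mac" /var/db/dhcpd_leases               # the surrounding block carries the name= and ip_address= fields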
	I1003 17:03:53.749773    1527 machine.go:88] provisioning docker machine ...
	I1003 17:03:53.749795    1527 buildroot.go:166] provisioning hostname "addons-585000"
	I1003 17:03:53.750715    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:53.750982    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:53.750988    1527 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-585000 && echo "addons-585000" | sudo tee /etc/hostname
	I1003 17:03:53.772634    1527 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1003 17:03:56.876175    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-585000
	
	I1003 17:03:56.876314    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:56.876804    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:56.876820    1527 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-585000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-585000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-585000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 17:03:56.953972    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 17:03:56.953997    1527 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17345-986/.minikube CaCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17345-986/.minikube}
	I1003 17:03:56.954022    1527 buildroot.go:174] setting up certificates
	I1003 17:03:56.954030    1527 provision.go:83] configureAuth start
	I1003 17:03:56.954037    1527 provision.go:138] copyHostCerts
	I1003 17:03:56.954176    1527 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/ca.pem (1082 bytes)
	I1003 17:03:56.954543    1527 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/cert.pem (1123 bytes)
	I1003 17:03:56.954713    1527 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/key.pem (1679 bytes)
	I1003 17:03:56.954859    1527 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem org=jenkins.addons-585000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-585000]
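	The server certificate is signed by the machine CA and carries a SAN for every name or address the docker daemon might be dialed on. minikube generates it with Go's crypto libraries; a sketch of an equivalent openssl flow (illustrative only, not what this run executed):
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	      -out server.csr -subj "/O=jenkins.addons-585000"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 1095 \
	      -extfile <(echo "subjectAltName=IP:192.168.105.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-585000") \
	      -out server.pem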
	I1003 17:03:57.033117    1527 provision.go:172] copyRemoteCerts
	I1003 17:03:57.033177    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 17:03:57.033191    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:03:57.066294    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 17:03:57.072924    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1003 17:03:57.080102    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 17:03:57.087350    1527 provision.go:86] duration metric: configureAuth took 133.318167ms
	I1003 17:03:57.087358    1527 buildroot.go:189] setting minikube options for container-runtime
	I1003 17:03:57.087453    1527 config.go:182] Loaded profile config "addons-585000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:03:57.087487    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:57.087704    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:57.087709    1527 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 17:03:57.150120    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 17:03:57.150127    1527 buildroot.go:70] root file system type: tmpfs
	I1003 17:03:57.150191    1527 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 17:03:57.150229    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:57.150472    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:57.150508    1527 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 17:03:57.217705    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 17:03:57.217760    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:57.218004    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:57.218015    1527 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 17:03:57.578682    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
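	The one-liner above is an install-if-changed guard: diff exits non-zero when the staged unit differs from the installed one (or, as here, when the installed one does not exist yet), and only then does the || branch move the new file into place and reload, enable, and restart docker; when the files already match, the whole command is a no-op. The general shape (UNIT is a placeholder path):
	    sudo diff -u UNIT UNIT.new || {
	      sudo mv UNIT.new UNIT
	      sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	    }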
	
	I1003 17:03:57.578700    1527 machine.go:91] provisioned docker machine in 3.829016209s
	I1003 17:03:57.578705    1527 client.go:171] LocalClient.Create took 17.568053208s
	I1003 17:03:57.578718    1527 start.go:167] duration metric: libmachine.API.Create for "addons-585000" took 17.568154958s
	I1003 17:03:57.578727    1527 start.go:300] post-start starting for "addons-585000" (driver="qemu2")
	I1003 17:03:57.578734    1527 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 17:03:57.578804    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 17:03:57.578814    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:03:57.611980    1527 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 17:03:57.613271    1527 info.go:137] Remote host: Buildroot 2021.02.12
	I1003 17:03:57.613284    1527 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17345-986/.minikube/addons for local assets ...
	I1003 17:03:57.613351    1527 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17345-986/.minikube/files for local assets ...
	I1003 17:03:57.613375    1527 start.go:303] post-start completed in 34.643875ms
	I1003 17:03:57.613878    1527 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/config.json ...
	I1003 17:03:57.614052    1527 start.go:128] duration metric: createHost completed in 18.365139125s
	I1003 17:03:57.614072    1527 main.go:141] libmachine: Using SSH client type: native
	I1003 17:03:57.614285    1527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10290de60] 0x1029105d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1003 17:03:57.614290    1527 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1003 17:03:57.674033    1527 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696377837.538666711
	
	I1003 17:03:57.674041    1527 fix.go:206] guest clock: 1696377837.538666711
	I1003 17:03:57.674045    1527 fix.go:219] Guest: 2023-10-03 17:03:57.538666711 -0700 PDT Remote: 2023-10-03 17:03:57.614055 -0700 PDT m=+18.473932959 (delta=-75.388289ms)
	I1003 17:03:57.674058    1527 fix.go:190] guest clock delta is within tolerance: -75.388289ms
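	The %!s(MISSING) and %!N(MISSING) in the logged command are Go fmt placeholders; the command actually run on the guest is date +%s.%N. The guest's epoch timestamp is compared against the host clock, and the machine is accepted when the delta is within tolerance, as with the -75ms seen here. The same check from a shell (a sketch; minikube performs it internally):
	    guest=$(ssh docker@192.168.105.2 'date +%s.%N')   # guest epoch time, nanosecond resolution
	    host=$(date +%s.%N)                               # GNU date; stock macOS date lacks %N
	    echo "delta: $(echo "$guest - $host" | bc)s"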
	I1003 17:03:57.674061    1527 start.go:83] releasing machines lock for "addons-585000", held for 18.42518925s
	I1003 17:03:57.674383    1527 ssh_runner.go:195] Run: cat /version.json
	I1003 17:03:57.674394    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:03:57.674401    1527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 17:03:57.674438    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:03:57.752962    1527 ssh_runner.go:195] Run: systemctl --version
	I1003 17:03:57.755435    1527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 17:03:57.757647    1527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 17:03:57.757682    1527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 17:03:57.763566    1527 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 17:03:57.763573    1527 start.go:469] detecting cgroup driver to use...
	I1003 17:03:57.763688    1527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 17:03:57.769448    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1003 17:03:57.772711    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 17:03:57.776091    1527 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 17:03:57.776116    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 17:03:57.779455    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 17:03:57.782467    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 17:03:57.785325    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 17:03:57.788823    1527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 17:03:57.792297    1527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 17:03:57.795615    1527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 17:03:57.798463    1527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 17:03:57.801289    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:03:57.870368    1527 ssh_runner.go:195] Run: sudo systemctl restart containerd
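	The sed passes above rewrite /etc/containerd/config.toml so that containerd's runc runtime uses the cgroupfs driver (SystemdCgroup = false), the v2 runc shim, the pause:3.9 sandbox image, and /etc/cni/net.d as its CNI config directory. The touched fragment ends up looking roughly like this (abridged):
	    [plugins."io.containerd.grpc.v1.cri"]
	      sandbox_image = "registry.k8s.io/pause:3.9"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      conf_dir = "/etc/cni/net.d"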
	I1003 17:03:57.876293    1527 start.go:469] detecting cgroup driver to use...
	I1003 17:03:57.876338    1527 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 17:03:57.883607    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 17:03:57.888285    1527 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 17:03:57.899852    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 17:03:57.904354    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 17:03:57.909268    1527 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 17:03:57.948483    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 17:03:57.953584    1527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 17:03:57.958727    1527 ssh_runner.go:195] Run: which cri-dockerd
	I1003 17:03:57.959920    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 17:03:57.962534    1527 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1003 17:03:57.967042    1527 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 17:03:58.045502    1527 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 17:03:58.130551    1527 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 17:03:58.130610    1527 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 17:03:58.135707    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:03:58.217318    1527 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 17:03:59.375606    1527 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.158300667s)
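	The 130-byte /etc/docker/daemon.json written just above pins docker to the same cgroup driver the kubelet will be configured with. The log does not echo its contents; a plausible minimal equivalent (an assumption, not quoted from this run) is:
	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"],
	      "log-driver": "json-file",
	      "storage-driver": "overlay2"
	    }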
	I1003 17:03:59.375671    1527 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 17:03:59.450200    1527 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 17:03:59.524947    1527 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 17:03:59.593104    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:03:59.661107    1527 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 17:03:59.668567    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:03:59.749258    1527 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1003 17:03:59.773399    1527 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 17:03:59.773477    1527 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 17:03:59.775487    1527 start.go:537] Will wait 60s for crictl version
	I1003 17:03:59.775515    1527 ssh_runner.go:195] Run: which crictl
	I1003 17:03:59.776779    1527 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 17:03:59.798500    1527 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1003 17:03:59.798566    1527 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 17:03:59.808398    1527 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 17:03:59.824846    1527 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1003 17:03:59.824981    1527 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1003 17:03:59.826582    1527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 17:03:59.830378    1527 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:03:59.830418    1527 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 17:03:59.836052    1527 docker.go:664] Got preloaded images: 
	I1003 17:03:59.836060    1527 docker.go:670] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I1003 17:03:59.836097    1527 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 17:03:59.839267    1527 ssh_runner.go:195] Run: which lz4
	I1003 17:03:59.840501    1527 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1003 17:03:59.841889    1527 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 17:03:59.841901    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I1003 17:04:01.160665    1527 docker.go:628] Took 1.320191 seconds to copy over tarball
	I1003 17:04:01.160727    1527 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 17:04:02.196965    1527 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.036251292s)
	I1003 17:04:02.196980    1527 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1003 17:04:02.212861    1527 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 17:04:02.216574    1527 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
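	Swapping in the repositories.json shipped with the preload is what makes the freshly extracted layers addressable by tag: the file maps repository:tag names to the image digests that now live under /var/lib/docker. If jq is available in the guest, the mapping can be inspected directly (illustrative):
	    sudo jq '.Repositories | keys' /var/lib/docker/image/overlay2/repositories.json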
	I1003 17:04:02.221618    1527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:04:02.297119    1527 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 17:04:03.756211    1527 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.459112709s)
	I1003 17:04:03.756322    1527 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 17:04:03.768480    1527 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 17:04:03.768491    1527 cache_images.go:84] Images are preloaded, skipping loading
	I1003 17:04:03.768560    1527 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 17:04:03.778188    1527 cni.go:84] Creating CNI manager for ""
	I1003 17:04:03.778197    1527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:04:03.778215    1527 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1003 17:04:03.778224    1527 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-585000 NodeName:addons-585000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 17:04:03.778291    1527 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-585000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 17:04:03.778325    1527 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-585000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-585000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1003 17:04:03.778383    1527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1003 17:04:03.781266    1527 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 17:04:03.781296    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 17:04:03.784487    1527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1003 17:04:03.789671    1527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 17:04:03.794606    1527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
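	The generated config above bundles four kubeadm documents: InitConfiguration (node-local API endpoint, CRI socket, kubelet extra args), ClusterConfiguration (control-plane endpoint, apiserver SANs, component extraArgs), KubeletConfiguration (cgroupfs driver, disk eviction disabled), and KubeProxyConfiguration (cluster CIDR, conntrack tuning skipped). It is staged as kubeadm.yaml.new and promoted to kubeadm.yaml just before bootstrap (the sudo cp near the end of this log), after which minikube runs kubeadm init against it, roughly (the real invocation also passes a list of --ignore-preflight-errors):
	    sudo /var/lib/minikube/binaries/v1.28.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml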
	I1003 17:04:03.799383    1527 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I1003 17:04:03.800642    1527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
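	Both this entry and the earlier host.minikube.internal one are injected with the same rewrite idiom: filter any stale mapping out of /etc/hosts, append the fresh one, write the result to a PID-keyed temp file, then sudo cp it back into place; a plain redirect would not work because the redirection is performed by the unprivileged shell, not by the privileged command. Generic shape (ADDR and NAME are placeholders):
	    { grep -v $'\tNAME$' /etc/hosts; printf 'ADDR\tNAME\n'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts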
	I1003 17:04:03.804557    1527 certs.go:56] Setting up /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000 for IP: 192.168.105.2
	I1003 17:04:03.804566    1527 certs.go:190] acquiring lock for shared ca certs: {Name:mk60f926c1ccb065a30406b60af36acc708e601e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.804722    1527 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key
	I1003 17:04:03.876701    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt ...
	I1003 17:04:03.876706    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt: {Name:mk0cc174d1dbd071293e805ad6149c7ec4b142e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.876904    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key ...
	I1003 17:04:03.876908    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key: {Name:mk5b0f090e1e87c9db61f19ee029eeb4bf325f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.877012    1527 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key
	I1003 17:04:03.972780    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.crt ...
	I1003 17:04:03.972784    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.crt: {Name:mk86baa625f8f131b96564e73e4ff47f159af5b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.972918    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key ...
	I1003 17:04:03.972921    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key: {Name:mk9131c9bbe858f22b10b784ddbb510d37a1be7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:03.973043    1527 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.key
	I1003 17:04:03.973049    1527 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt with IP's: []
	I1003 17:04:04.093588    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt ...
	I1003 17:04:04.093595    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: {Name:mka78906c9a5365a7e95b92135f4b70302d9ca1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.093800    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.key ...
	I1003 17:04:04.093804    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.key: {Name:mk6f93cb157b068e90dc54f48279212367ea5933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.093915    1527 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key.96055969
	I1003 17:04:04.093926    1527 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1003 17:04:04.305627    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt.96055969 ...
	I1003 17:04:04.305631    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt.96055969: {Name:mk46c7ebd409ecd36224a01eb936cac8f04632ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.305820    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key.96055969 ...
	I1003 17:04:04.305826    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key.96055969: {Name:mk401a87dc7fd50281b18296e80430af94b0e1a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.305953    1527 certs.go:337] copying /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt
	I1003 17:04:04.306054    1527 certs.go:341] copying /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key
	I1003 17:04:04.306143    1527 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.key
	I1003 17:04:04.306160    1527 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.crt with IP's: []
	I1003 17:04:04.424122    1527 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.crt ...
	I1003 17:04:04.424130    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.crt: {Name:mkdf8b0eea5ab20c208335ea1ea4eff82b50060d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:04.424327    1527 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.key ...
	I1003 17:04:04.424330    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.key: {Name:mkfa742ebb9620853aeccd91e597f5c286ba74ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
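	At this point the profile holds two independent CAs and their leaves: minikubeCA signs the user client pair and the apiserver serving pair (SANs 192.168.105.2, 10.96.0.1, 127.0.0.1, 10.0.0.1), while proxyClientCA signs the front-proxy client pair used by the API aggregation layer. Any leaf can be checked against its CA with, for example (paths abbreviated):
	    openssl verify -CAfile .minikube/ca.crt .minikube/profiles/addons-585000/apiserver.crt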
	I1003 17:04:04.424537    1527 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 17:04:04.424561    1527 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem (1082 bytes)
	I1003 17:04:04.424578    1527 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem (1123 bytes)
	I1003 17:04:04.424595    1527 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem (1679 bytes)
	I1003 17:04:04.424928    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1003 17:04:04.432625    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 17:04:04.440097    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 17:04:04.447625    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 17:04:04.454640    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 17:04:04.461322    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 17:04:04.468544    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 17:04:04.475781    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 17:04:04.482585    1527 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 17:04:04.489167    1527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 17:04:04.495146    1527 ssh_runner.go:195] Run: openssl version
	I1003 17:04:04.497003    1527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 17:04:04.500457    1527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:04:04.502228    1527 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:04:04.502254    1527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:04:04.504052    1527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
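The two steps above implement OpenSSL's hashed-directory CA lookup: 'openssl x509 -hash -noout' prints the subject-name hash of the certificate (here b5213941), and a symlink named <hash>.0 under /etc/ssl/certs is how TLS clients on the node locate minikubeCA.pem. Reproducing the check by hand, with the paths used in this run:

    # Compute the subject hash and create the OpenSSL-style lookup symlink.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"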
	I1003 17:04:04.507353    1527 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1003 17:04:04.508658    1527 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
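The exit status 2 from 'ls' is how minikube distinguishes a first start from a restart: an absent /var/lib/minikube/certs/etcd directory means etcd has never been provisioned on this machine. A standalone sketch of the same probe, assuming the guest address, user, and key path from this run:

    # Exit status 2 (path missing) is treated as "first start".
    if ! ssh -i /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa \
         docker@192.168.105.2 "ls /var/lib/minikube/certs/etcd" >/dev/null 2>&1; then
        echo "no etcd certs directory: first start"
    fi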
	I1003 17:04:04.508694    1527 kubeadm.go:404] StartCluster: {Name:addons-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-585000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:04:04.508756    1527 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 17:04:04.518400    1527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 17:04:04.521199    1527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 17:04:04.524450    1527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 17:04:04.527574    1527 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 17:04:04.527588    1527 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 17:04:04.548358    1527 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1003 17:04:04.548382    1527 kubeadm.go:322] [preflight] Running pre-flight checks
	I1003 17:04:04.607082    1527 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 17:04:04.607142    1527 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 17:04:04.607188    1527 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 17:04:04.714297    1527 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 17:04:04.722481    1527 out.go:204]   - Generating certificates and keys ...
	I1003 17:04:04.722514    1527 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1003 17:04:04.722542    1527 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1003 17:04:04.759296    1527 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 17:04:04.808856    1527 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1003 17:04:04.964910    1527 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1003 17:04:05.087030    1527 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1003 17:04:05.149596    1527 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1003 17:04:05.149646    1527 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-585000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I1003 17:04:05.222012    1527 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1003 17:04:05.222068    1527 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-585000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I1003 17:04:05.286742    1527 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 17:04:05.330145    1527 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 17:04:05.629678    1527 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1003 17:04:05.629709    1527 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 17:04:05.731900    1527 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 17:04:05.854397    1527 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 17:04:05.983877    1527 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 17:04:06.151732    1527 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 17:04:06.152571    1527 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 17:04:06.153646    1527 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 17:04:06.157945    1527 out.go:204]   - Booting up control plane ...
	I1003 17:04:06.158024    1527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 17:04:06.158075    1527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 17:04:06.158108    1527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 17:04:06.161462    1527 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 17:04:06.161792    1527 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 17:04:06.161837    1527 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1003 17:04:06.249226    1527 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1003 17:04:09.750516    1527 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.501291 seconds
	I1003 17:04:09.750582    1527 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 17:04:09.756399    1527 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 17:04:10.266794    1527 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 17:04:10.266918    1527 kubeadm.go:322] [mark-control-plane] Marking the node addons-585000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 17:04:10.770745    1527 kubeadm.go:322] [bootstrap-token] Using token: uzkazy.ii0fjdqhazr4xlxp
	I1003 17:04:10.779414    1527 out.go:204]   - Configuring RBAC rules ...
	I1003 17:04:10.779490    1527 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 17:04:10.779537    1527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 17:04:10.781300    1527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 17:04:10.782670    1527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1003 17:04:10.783591    1527 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 17:04:10.784726    1527 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 17:04:10.788735    1527 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 17:04:10.960644    1527 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1003 17:04:11.179744    1527 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1003 17:04:11.180034    1527 kubeadm.go:322] 
	I1003 17:04:11.180066    1527 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1003 17:04:11.180069    1527 kubeadm.go:322] 
	I1003 17:04:11.180099    1527 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1003 17:04:11.180102    1527 kubeadm.go:322] 
	I1003 17:04:11.180113    1527 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1003 17:04:11.180141    1527 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 17:04:11.180162    1527 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 17:04:11.180170    1527 kubeadm.go:322] 
	I1003 17:04:11.180201    1527 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1003 17:04:11.180204    1527 kubeadm.go:322] 
	I1003 17:04:11.180235    1527 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 17:04:11.180239    1527 kubeadm.go:322] 
	I1003 17:04:11.180270    1527 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1003 17:04:11.180305    1527 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 17:04:11.180339    1527 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 17:04:11.180343    1527 kubeadm.go:322] 
	I1003 17:04:11.180390    1527 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 17:04:11.180428    1527 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1003 17:04:11.180432    1527 kubeadm.go:322] 
	I1003 17:04:11.180481    1527 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uzkazy.ii0fjdqhazr4xlxp \
	I1003 17:04:11.180530    1527 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9490a7a2ce6f448c9720863afea1604779c0ef0543f588db367e732362807037 \
	I1003 17:04:11.180544    1527 kubeadm.go:322] 	--control-plane 
	I1003 17:04:11.180546    1527 kubeadm.go:322] 
	I1003 17:04:11.180583    1527 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1003 17:04:11.180585    1527 kubeadm.go:322] 
	I1003 17:04:11.180630    1527 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uzkazy.ii0fjdqhazr4xlxp \
	I1003 17:04:11.180681    1527 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9490a7a2ce6f448c9720863afea1604779c0ef0543f588db367e732362807037 
	I1003 17:04:11.180859    1527 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
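The --discovery-token-ca-cert-hash in the join commands above pins joining nodes to this cluster's CA (it is the sha256 of the CA's DER-encoded public key). It can be recomputed on the control-plane node with the standard recipe from the kubeadm documentation, using the certificateDir this run reports (/var/lib/minikube/certs) and assuming an RSA CA key:

    # Recompute the discovery hash from the cluster CA certificate.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'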
	I1003 17:04:11.180869    1527 cni.go:84] Creating CNI manager for ""
	I1003 17:04:11.180879    1527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:04:11.187987    1527 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1003 17:04:11.191059    1527 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1003 17:04:11.194034    1527 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
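The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain the previous lines announce. The exact template is not shown in the log; a representative bridge conflist of the same shape (field values here are illustrative, not the actual payload) would be written as:

    # Sketch of a minimal bridge CNI configuration with a port-mapping plugin.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF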
	I1003 17:04:11.198752    1527 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 17:04:11.198795    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:11.198820    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d9526fa6c1a1bb1b20f95e15606f1308e308d84a minikube.k8s.io/name=addons-585000 minikube.k8s.io/updated_at=2023_10_03T17_04_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:11.257808    1527 ops.go:34] apiserver oom_adj: -16
	I1003 17:04:11.257825    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:11.294996    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:11.828345    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:12.328361    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:12.828330    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:13.328397    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:13.828294    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:14.328299    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:14.828282    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:15.328311    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:15.828265    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:16.328253    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:16.828279    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:17.328231    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:17.828247    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:18.328230    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:18.828192    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:19.328127    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:19.828168    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:20.328197    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:20.827992    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:21.327973    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:21.828182    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:22.328181    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:22.828070    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:23.328028    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:23.828030    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:24.327999    1527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:04:24.363217    1527 kubeadm.go:1081] duration metric: took 13.164797125s to wait for elevateKubeSystemPrivileges.
	I1003 17:04:24.363234    1527 kubeadm.go:406] StartCluster complete in 19.855065708s
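The burst of identical 'kubectl get sa default' calls between 17:04:11 and 17:04:24 is a readiness poll: the "default" ServiceAccount is created asynchronously after the control plane comes up, and minikube retries on a roughly 500ms cadence before treating the RBAC elevation (the minikube-rbac clusterrolebinding created above) as complete. An equivalent standalone wait, with the binary and kubeconfig paths from this run:

    # Poll until the default ServiceAccount exists.
    until sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done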
	I1003 17:04:24.363243    1527 settings.go:142] acquiring lock: {Name:mkad5f21e92defa14247d9a0adf05a6e4272cec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:24.363390    1527 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:04:24.363569    1527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/kubeconfig: {Name:mke3e06a6a2057954076f4772b87ca4469721c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:04:24.363810    1527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 17:04:24.363870    1527 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1003 17:04:24.363932    1527 addons.go:69] Setting volumesnapshots=true in profile "addons-585000"
	I1003 17:04:24.363942    1527 addons.go:231] Setting addon volumesnapshots=true in "addons-585000"
	I1003 17:04:24.363950    1527 config.go:182] Loaded profile config "addons-585000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:04:24.363971    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363950    1527 addons.go:69] Setting ingress-dns=true in profile "addons-585000"
	I1003 17:04:24.364021    1527 addons.go:231] Setting addon ingress-dns=true in "addons-585000"
	I1003 17:04:24.364047    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363981    1527 addons.go:69] Setting registry=true in profile "addons-585000"
	I1003 17:04:24.364069    1527 addons.go:231] Setting addon registry=true in "addons-585000"
	I1003 17:04:24.364086    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363984    1527 addons.go:69] Setting inspektor-gadget=true in profile "addons-585000"
	I1003 17:04:24.364126    1527 addons.go:231] Setting addon inspektor-gadget=true in "addons-585000"
	I1003 17:04:24.364190    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363986    1527 addons.go:69] Setting metrics-server=true in profile "addons-585000"
	I1003 17:04:24.364214    1527 addons.go:231] Setting addon metrics-server=true in "addons-585000"
	I1003 17:04:24.364247    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363988    1527 addons.go:69] Setting gcp-auth=true in profile "addons-585000"
	I1003 17:04:24.364271    1527 mustload.go:65] Loading cluster: addons-585000
	W1003 17:04:24.364287    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	W1003 17:04:24.364300    1527 addons.go:277] "addons-585000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W1003 17:04:24.364313    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	W1003 17:04:24.364319    1527 addons.go:277] "addons-585000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I1003 17:04:24.364322    1527 addons.go:467] Verifying addon registry=true in "addons-585000"
	I1003 17:04:24.370457    1527 out.go:177] * Verifying registry addon...
	I1003 17:04:24.364356    1527 config.go:182] Loaded profile config "addons-585000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:04:24.363992    1527 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-585000"
	I1003 17:04:24.370476    1527 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-585000"
	I1003 17:04:24.363993    1527 addons.go:69] Setting ingress=true in profile "addons-585000"
	I1003 17:04:24.363995    1527 addons.go:69] Setting cloud-spanner=true in profile "addons-585000"
	I1003 17:04:24.370530    1527 addons.go:231] Setting addon cloud-spanner=true in "addons-585000"
	I1003 17:04:24.363994    1527 addons.go:69] Setting default-storageclass=true in profile "addons-585000"
	I1003 17:04:24.370529    1527 addons.go:231] Setting addon ingress=true in "addons-585000"
	I1003 17:04:24.370591    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.370597    1527 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-585000"
	I1003 17:04:24.363996    1527 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-585000"
	W1003 17:04:24.364679    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	I1003 17:04:24.370563    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.363989    1527 addons.go:69] Setting storage-provisioner=true in profile "addons-585000"
	I1003 17:04:24.370642    1527 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-585000"
	W1003 17:04:24.370646    1527 addons.go:277] "addons-585000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W1003 17:04:24.370793    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	W1003 17:04:24.370892    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	I1003 17:04:24.371573    1527 addons.go:231] Setting addon default-storageclass=true in "addons-585000"
	I1003 17:04:24.371900    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.380387    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1003 17:04:24.383435    1527 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1003 17:04:24.383459    1527 addons.go:231] Setting addon storage-provisioner=true in "addons-585000"
	W1003 17:04:24.383452    1527 addons.go:277] "addons-585000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W1003 17:04:24.383476    1527 addons_storage_classes.go:57] "addons-585000" is not running, writing storage-provisioner-rancher=true to disk and skipping enablement
	I1003 17:04:24.383496    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.383533    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.383999    1527 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1003 17:04:24.386486    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1003 17:04:24.389459    1527 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1003 17:04:24.389462    1527 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-585000"
	I1003 17:04:24.389488    1527 addons.go:467] Verifying addon ingress=true in "addons-585000"
	I1003 17:04:24.389521    1527 host.go:66] Checking if "addons-585000" exists ...
	W1003 17:04:24.389784    1527 host.go:54] host status for "addons-585000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/monitor: connect: connection refused
	I1003 17:04:24.395446    1527 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1003 17:04:24.395461    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:24.395467    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1003 17:04:24.395470    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	W1003 17:04:24.395486    1527 addons.go:277] "addons-585000" is not running, setting default-storageclass=true and skipping enablement (err=<nil>)
	I1003 17:04:24.398475    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1003 17:04:24.398484    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.398497    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.407159    1527 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I1003 17:04:24.408397    1527 out.go:177] * Verifying ingress addon...
	I1003 17:04:24.414483    1527 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 17:04:24.418517    1527 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1003 17:04:24.421445    1527 out.go:177]   - Using image docker.io/busybox:stable
	I1003 17:04:24.423490    1527 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-585000" context rescaled to 1 replicas
	I1003 17:04:24.424496    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1003 17:04:24.427548    1527 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:04:24.427858    1527 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1003 17:04:24.430982    1527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 17:04:24.433444    1527 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 17:04:24.439439    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1003 17:04:24.439555    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.442455    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 17:04:24.442466    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.447401    1527 out.go:177] * Verifying Kubernetes components...
	I1003 17:04:24.451674    1527 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1003 17:04:24.453485    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 17:04:24.457496    1527 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1003 17:04:24.467442    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1003 17:04:24.480410    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1003 17:04:24.476494    1527 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1003 17:04:24.483451    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1003 17:04:24.486383    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1003 17:04:24.483462    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.485254    1527 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1003 17:04:24.489434    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1003 17:04:24.492499    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1003 17:04:24.499438    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1003 17:04:24.508438    1527 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1003 17:04:24.512295    1527 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1003 17:04:24.512323    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1003 17:04:24.512325    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1003 17:04:24.512338    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1003 17:04:24.512347    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:24.558122    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 17:04:24.567387    1527 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1003 17:04:24.567399    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1003 17:04:24.572776    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1003 17:04:24.599074    1527 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1003 17:04:24.599085    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1003 17:04:24.675063    1527 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1003 17:04:24.675075    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1003 17:04:24.676328    1527 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1003 17:04:24.676333    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1003 17:04:24.739366    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1003 17:04:24.739377    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1003 17:04:24.749052    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1003 17:04:24.770490    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1003 17:04:24.770501    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1003 17:04:24.785371    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1003 17:04:24.881017    1527 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 17:04:24.881027    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1003 17:04:24.903016    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1003 17:04:24.903028    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1003 17:04:24.969674    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1003 17:04:24.969686    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1003 17:04:25.049120    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 17:04:25.150729    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1003 17:04:25.150740    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1003 17:04:25.240313    1527 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1003 17:04:25.240326    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1003 17:04:25.371181    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1003 17:04:25.371193    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1003 17:04:25.404463    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1003 17:04:25.404477    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1003 17:04:25.434986    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1003 17:04:25.434995    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1003 17:04:25.450155    1527 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.007696167s)
	I1003 17:04:25.450172    1527 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
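The sed pipeline that just completed splices a hosts block into the Corefile stored in the coredns ConfigMap, so that host.minikube.internal resolves to the host-side gateway (192.168.105.1) from inside the cluster. The patched Corefile can be inspected directly:

    # Dump the Corefile to confirm the injected host record.
    sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'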
	I1003 17:04:25.450599    1527 node_ready.go:35] waiting up to 6m0s for node "addons-585000" to be "Ready" ...
	I1003 17:04:25.452207    1527 node_ready.go:49] node "addons-585000" has status "Ready":"True"
	I1003 17:04:25.452226    1527 node_ready.go:38] duration metric: took 1.601791ms waiting for node "addons-585000" to be "Ready" ...
	I1003 17:04:25.452231    1527 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1003 17:04:25.455004    1527 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace to be "Ready" ...
	I1003 17:04:25.474770    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1003 17:04:25.474780    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1003 17:04:25.542278    1527 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1003 17:04:25.542289    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1003 17:04:25.568058    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1003 17:04:25.900055    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.327297041s)
	I1003 17:04:25.900073    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.151039167s)
	I1003 17:04:25.900620    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.342523s)
	I1003 17:04:26.031761    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246402458s)
	I1003 17:04:26.031779    1527 addons.go:467] Verifying addon metrics-server=true in "addons-585000"
	W1003 17:04:26.031811    1527 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1003 17:04:26.031889    1527 retry.go:31] will retry after 325.291799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
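The failure being retried here is the usual CRD-establishment race: the VolumeSnapshotClass object is submitted in the same apply batch as the CRD that defines its kind, and the API server rejects it until that CRD is established. The retry below succeeds once the CRDs register; the same wait can be made explicit before re-applying:

    # Block until the VolumeSnapshotClass CRD is established.
    sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io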
	I1003 17:04:26.358314    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 17:04:27.197455    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.629412791s)
	I1003 17:04:27.197475    1527 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-585000"
	I1003 17:04:27.203259    1527 out.go:177] * Verifying csi-hostpath-driver addon...
	I1003 17:04:27.212640    1527 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1003 17:04:27.217488    1527 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1003 17:04:27.217496    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:27.220905    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:27.463742    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:27.724717    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:28.224875    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:28.724679    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:28.978938    1527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.620667959s)
	I1003 17:04:29.224602    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:29.725503    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:29.965298    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:30.225992    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:30.727405    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:30.990463    1527 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1003 17:04:30.990480    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:31.027829    1527 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1003 17:04:31.032721    1527 addons.go:231] Setting addon gcp-auth=true in "addons-585000"
	I1003 17:04:31.032748    1527 host.go:66] Checking if "addons-585000" exists ...
	I1003 17:04:31.033437    1527 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1003 17:04:31.033445    1527 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/addons-585000/id_rsa Username:docker}
	I1003 17:04:31.067911    1527 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1003 17:04:31.071726    1527 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1003 17:04:31.074751    1527 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1003 17:04:31.074757    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1003 17:04:31.080083    1527 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1003 17:04:31.080090    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1003 17:04:31.084923    1527 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1003 17:04:31.084929    1527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1003 17:04:31.089910    1527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1003 17:04:31.227750    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:31.348093    1527 addons.go:467] Verifying addon gcp-auth=true in "addons-585000"
	I1003 17:04:31.351542    1527 out.go:177] * Verifying gcp-auth addon...
	I1003 17:04:31.357561    1527 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1003 17:04:31.360910    1527 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1003 17:04:31.360918    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:31.363860    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:31.729484    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:31.868680    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:32.231013    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:32.368635    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:32.469496    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:32.731026    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:32.869106    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:33.231972    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:33.370303    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:33.732757    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:33.870865    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:34.233653    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:34.371764    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:34.734158    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:34.872420    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:34.973595    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:35.235439    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:35.373126    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:35.736125    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:35.873867    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:36.236732    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:36.377065    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:36.738038    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:36.875114    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:37.237689    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:37.375767    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:37.476370    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:37.738419    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:37.876233    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:38.238661    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:38.376555    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:38.739112    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:38.877451    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:39.239515    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:39.378140    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:39.479966    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:39.740793    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:39.878589    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:40.240762    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:40.379208    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:40.741780    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:40.879681    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:41.241996    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:41.380297    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:41.742785    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:41.880492    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:41.981384    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:42.243296    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:42.381074    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:42.743302    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:42.883349    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:43.244024    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:43.381895    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:43.744586    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:43.882405    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:44.244990    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:44.382844    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:44.483439    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:44.745332    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:44.884458    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:45.245668    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:45.383767    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:45.746153    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:45.884389    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:46.246346    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:46.384557    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:46.486038    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:46.746583    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:46.885125    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:47.247216    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:47.385576    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:47.747761    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:47.885573    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:48.247887    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:48.385997    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:48.748143    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:48.886329    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:48.987206    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:49.248369    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:49.386625    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:49.749134    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:49.886585    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:50.249103    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:50.386927    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:50.749495    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:50.887744    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:50.988394    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:51.249792    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:51.387776    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:51.749720    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:51.887719    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:52.249981    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:52.389544    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:52.751121    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:52.888557    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:53.250511    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:53.388659    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:53.490543    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:53.750650    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:53.888553    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:54.250803    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:54.389104    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:54.751294    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:54.889472    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:55.253309    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:55.389552    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:55.751715    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:55.889618    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:55.990684    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:56.251672    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:56.389851    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:56.752031    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:56.889910    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:57.252259    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:57.390082    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:57.752574    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:57.890363    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:57.991811    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:04:58.252678    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:04:58.390362    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:58.752590    1527 kapi.go:107] duration metric: took 31.511793709s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1003 17:04:58.890958    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:59.391228    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:04:59.891535    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:00.391427    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:00.491918    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:05:00.891593    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:01.391491    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:01.891778    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:02.391830    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:02.492542    1527 pod_ready.go:102] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"False"
	I1003 17:05:02.891986    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:03.393570    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:03.892644    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:04.392440    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:04.892576    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:04.992969    1527 pod_ready.go:92] pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:04.992976    1527 pod_ready.go:81] duration metric: took 39.508226208s waiting for pod "coredns-5dd5756b68-khk2s" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.992981    1527 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r24k8" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.993883    1527 pod_ready.go:97] error getting pod "coredns-5dd5756b68-r24k8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-r24k8" not found
	I1003 17:05:04.993891    1527 pod_ready.go:81] duration metric: took 907.208µs waiting for pod "coredns-5dd5756b68-r24k8" in "kube-system" namespace to be "Ready" ...
	E1003 17:05:04.993895    1527 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-r24k8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-r24k8" not found
	I1003 17:05:04.993899    1527 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.996401    1527 pod_ready.go:92] pod "etcd-addons-585000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:04.996406    1527 pod_ready.go:81] duration metric: took 2.500458ms waiting for pod "etcd-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.996410    1527 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.999071    1527 pod_ready.go:92] pod "kube-apiserver-addons-585000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:04.999079    1527 pod_ready.go:81] duration metric: took 2.666208ms waiting for pod "kube-apiserver-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:04.999082    1527 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.001717    1527 pod_ready.go:92] pod "kube-controller-manager-addons-585000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:05.001725    1527 pod_ready.go:81] duration metric: took 2.637584ms waiting for pod "kube-controller-manager-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.001728    1527 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4m9nm" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.193135    1527 pod_ready.go:92] pod "kube-proxy-4m9nm" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:05.193144    1527 pod_ready.go:81] duration metric: took 191.372792ms waiting for pod "kube-proxy-4m9nm" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.193148    1527 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.392551    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:05.593390    1527 pod_ready.go:92] pod "kube-scheduler-addons-585000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:05:05.593399    1527 pod_ready.go:81] duration metric: took 400.165917ms waiting for pod "kube-scheduler-addons-585000" in "kube-system" namespace to be "Ready" ...
	I1003 17:05:05.593404    1527 pod_ready.go:38] duration metric: took 40.11130675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1003 17:05:05.593416    1527 api_server.go:52] waiting for apiserver process to appear ...
	I1003 17:05:05.593483    1527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 17:05:05.598747    1527 api_server.go:72] duration metric: took 41.126444291s to wait for apiserver process to appear ...
	I1003 17:05:05.598757    1527 api_server.go:88] waiting for apiserver healthz status ...
	I1003 17:05:05.598764    1527 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I1003 17:05:05.601839    1527 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I1003 17:05:05.602592    1527 api_server.go:141] control plane version: v1.28.2
	I1003 17:05:05.602598    1527 api_server.go:131] duration metric: took 3.8385ms to wait for apiserver health ...
	I1003 17:05:05.602602    1527 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 17:05:05.796281    1527 system_pods.go:59] 13 kube-system pods found
	I1003 17:05:05.796294    1527 system_pods.go:61] "coredns-5dd5756b68-khk2s" [c45559e9-de80-4305-942c-094315d94d47] Running
	I1003 17:05:05.796297    1527 system_pods.go:61] "csi-hostpath-attacher-0" [63177292-e73a-431c-a767-00cf3ce9bce0] Running
	I1003 17:05:05.796299    1527 system_pods.go:61] "csi-hostpath-resizer-0" [8f737860-0999-4db9-9f65-190d64a4cfb4] Running
	I1003 17:05:05.796301    1527 system_pods.go:61] "csi-hostpathplugin-8thxw" [4548034b-9d0c-4d9d-9f9f-839610935d97] Running
	I1003 17:05:05.796303    1527 system_pods.go:61] "etcd-addons-585000" [f1858d65-44e4-469b-8372-a4e1e0a14d48] Running
	I1003 17:05:05.796306    1527 system_pods.go:61] "kube-apiserver-addons-585000" [377e1779-3201-42b4-945b-e2195b3f0a9a] Running
	I1003 17:05:05.796308    1527 system_pods.go:61] "kube-controller-manager-addons-585000" [f9156947-57be-49f5-9a1c-aadaa5bddf0c] Running
	I1003 17:05:05.796310    1527 system_pods.go:61] "kube-proxy-4m9nm" [847e005d-c7e1-4128-9ac6-2fef6730e3e4] Running
	I1003 17:05:05.796312    1527 system_pods.go:61] "kube-scheduler-addons-585000" [5b3e2487-5781-451e-b566-814b86a4eb80] Running
	I1003 17:05:05.796315    1527 system_pods.go:61] "metrics-server-7c66d45ddc-bvd9d" [e70964ce-39c1-444c-bfcc-e672f6027857] Running
	I1003 17:05:05.796317    1527 system_pods.go:61] "snapshot-controller-58dbcc7b99-2gg5z" [3f9fa5f9-6948-4b90-9114-e5bdb6822c87] Running
	I1003 17:05:05.796319    1527 system_pods.go:61] "snapshot-controller-58dbcc7b99-t2wrg" [174abadb-36b9-4fab-b1e2-44fa5de2def5] Running
	I1003 17:05:05.796321    1527 system_pods.go:61] "storage-provisioner" [41c3bd79-1e45-45d3-a1d8-9f5a2c5c7da5] Running
	I1003 17:05:05.796324    1527 system_pods.go:74] duration metric: took 193.680291ms to wait for pod list to return data ...
	I1003 17:05:05.796329    1527 default_sa.go:34] waiting for default service account to be created ...
	I1003 17:05:05.892491    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:05.992623    1527 default_sa.go:45] found service account: "default"
	I1003 17:05:05.992632    1527 default_sa.go:55] duration metric: took 196.262458ms for default service account to be created ...
	I1003 17:05:05.992638    1527 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 17:05:06.196272    1527 system_pods.go:86] 13 kube-system pods found
	I1003 17:05:06.196281    1527 system_pods.go:89] "coredns-5dd5756b68-khk2s" [c45559e9-de80-4305-942c-094315d94d47] Running
	I1003 17:05:06.196284    1527 system_pods.go:89] "csi-hostpath-attacher-0" [63177292-e73a-431c-a767-00cf3ce9bce0] Running
	I1003 17:05:06.196286    1527 system_pods.go:89] "csi-hostpath-resizer-0" [8f737860-0999-4db9-9f65-190d64a4cfb4] Running
	I1003 17:05:06.196288    1527 system_pods.go:89] "csi-hostpathplugin-8thxw" [4548034b-9d0c-4d9d-9f9f-839610935d97] Running
	I1003 17:05:06.196290    1527 system_pods.go:89] "etcd-addons-585000" [f1858d65-44e4-469b-8372-a4e1e0a14d48] Running
	I1003 17:05:06.196292    1527 system_pods.go:89] "kube-apiserver-addons-585000" [377e1779-3201-42b4-945b-e2195b3f0a9a] Running
	I1003 17:05:06.196294    1527 system_pods.go:89] "kube-controller-manager-addons-585000" [f9156947-57be-49f5-9a1c-aadaa5bddf0c] Running
	I1003 17:05:06.196296    1527 system_pods.go:89] "kube-proxy-4m9nm" [847e005d-c7e1-4128-9ac6-2fef6730e3e4] Running
	I1003 17:05:06.196298    1527 system_pods.go:89] "kube-scheduler-addons-585000" [5b3e2487-5781-451e-b566-814b86a4eb80] Running
	I1003 17:05:06.196300    1527 system_pods.go:89] "metrics-server-7c66d45ddc-bvd9d" [e70964ce-39c1-444c-bfcc-e672f6027857] Running
	I1003 17:05:06.196302    1527 system_pods.go:89] "snapshot-controller-58dbcc7b99-2gg5z" [3f9fa5f9-6948-4b90-9114-e5bdb6822c87] Running
	I1003 17:05:06.196304    1527 system_pods.go:89] "snapshot-controller-58dbcc7b99-t2wrg" [174abadb-36b9-4fab-b1e2-44fa5de2def5] Running
	I1003 17:05:06.196306    1527 system_pods.go:89] "storage-provisioner" [41c3bd79-1e45-45d3-a1d8-9f5a2c5c7da5] Running
	I1003 17:05:06.196310    1527 system_pods.go:126] duration metric: took 203.63025ms to wait for k8s-apps to be running ...
	I1003 17:05:06.196313    1527 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 17:05:06.196369    1527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 17:05:06.201921    1527 system_svc.go:56] duration metric: took 5.604333ms WaitForService to wait for kubelet.
	I1003 17:05:06.201929    1527 kubeadm.go:581] duration metric: took 41.729511541s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1003 17:05:06.201940    1527 node_conditions.go:102] verifying NodePressure condition ...
	I1003 17:05:06.392698    1527 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:05:06.393051    1527 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1003 17:05:06.393087    1527 node_conditions.go:123] node cpu capacity is 2
	I1003 17:05:06.393093    1527 node_conditions.go:105] duration metric: took 191.114375ms to run NodePressure ...
	I1003 17:05:06.393098    1527 start.go:228] waiting for startup goroutines ...
	I1003 17:05:06.892943    1527 kapi.go:107] duration metric: took 35.5093015s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1003 17:05:06.897190    1527 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-585000 cluster.
	I1003 17:05:06.900088    1527 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1003 17:05:06.903114    1527 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1003 17:10:24.417571    1527 kapi.go:107] duration metric: took 6m0.007521708s to wait for kubernetes.io/minikube-addons=registry ...
	W1003 17:10:24.417687    1527 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1003 17:10:24.469744    1527 kapi.go:107] duration metric: took 6m0.015851042s to wait for app.kubernetes.io/name=ingress-nginx ...
	W1003 17:10:24.469781    1527 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I1003 17:10:24.476811    1527 out.go:177] * Enabled addons: ingress-dns, inspektor-gadget, default-storageclass, cloud-spanner, storage-provisioner-rancher, storage-provisioner, metrics-server, volumesnapshots, csi-hostpath-driver, gcp-auth
	I1003 17:10:24.483852    1527 addons.go:502] enable addons completed in 6m0.093974291s: enabled=[ingress-dns inspektor-gadget default-storageclass cloud-spanner storage-provisioner-rancher storage-provisioner metrics-server volumesnapshots csi-hostpath-driver gcp-auth]
	I1003 17:10:24.483865    1527 start.go:233] waiting for cluster config update ...
	I1003 17:10:24.483873    1527 start.go:242] writing updated cluster config ...
	I1003 17:10:24.484152    1527 ssh_runner.go:195] Run: rm -f paused
	I1003 17:10:24.580795    1527 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I1003 17:10:24.588925    1527 out.go:177] * Done! kubectl is now configured to use "addons-585000" cluster and "default" namespace by default
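	The 'registry' and 'ingress' failures above are six-minute timeouts on pod label selectors. A hedged follow-up, assuming the kubeconfig context this run just configured is still usable, is to list what each selector actually matches (both selectors are taken verbatim from the log lines above; the commands are illustrative and not part of the captured run):
	
	  kubectl get pods -A -l kubernetes.io/minikube-addons=registry
	  kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx
	
	Empty output from either command would mean the addon pods were never created at all, which is consistent with the "context deadline exceeded" errors rather than with pods that started and then crashed.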
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-10-04 00:03:51 UTC, ends at Wed 2023-10-04 00:20:19 UTC. --
	Oct 04 00:12:08 addons-585000 dockerd[1122]: time="2023-10-04T00:12:08.349007007Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1115]: time="2023-10-04T00:12:14.240701963Z" level=info msg="ignoring event" container=5ff0bef51f286679f0f3e673886c6c3125c354cab0674eabec453da6975a6991 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.241579343Z" level=info msg="shim disconnected" id=5ff0bef51f286679f0f3e673886c6c3125c354cab0674eabec453da6975a6991 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.241698010Z" level=warning msg="cleaning up after shim disconnected" id=5ff0bef51f286679f0f3e673886c6c3125c354cab0674eabec453da6975a6991 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.241717594Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1115]: time="2023-10-04T00:12:14.242239805Z" level=info msg="ignoring event" container=517d59bdac0d4789df6d0fe970fb5d29190888751abb5196cf7cdf1d701cec03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.242324306Z" level=info msg="shim disconnected" id=517d59bdac0d4789df6d0fe970fb5d29190888751abb5196cf7cdf1d701cec03 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.242883392Z" level=warning msg="cleaning up after shim disconnected" id=517d59bdac0d4789df6d0fe970fb5d29190888751abb5196cf7cdf1d701cec03 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.242933309Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.323347774Z" level=info msg="shim disconnected" id=4f4a827bdb5aaf43cabfbf9cb0a34b674a64751dd5011a061778c8476e531e8e namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.323376233Z" level=warning msg="cleaning up after shim disconnected" id=4f4a827bdb5aaf43cabfbf9cb0a34b674a64751dd5011a061778c8476e531e8e namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.323380274Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1115]: time="2023-10-04T00:12:14.323530942Z" level=info msg="ignoring event" container=4f4a827bdb5aaf43cabfbf9cb0a34b674a64751dd5011a061778c8476e531e8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:14 addons-585000 dockerd[1115]: time="2023-10-04T00:12:14.329811145Z" level=info msg="ignoring event" container=c09f627f07f68d859d6bfdefb05a6e13d8c3efbab921fb36772c99b884785536 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.329912979Z" level=info msg="shim disconnected" id=c09f627f07f68d859d6bfdefb05a6e13d8c3efbab921fb36772c99b884785536 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.329940521Z" level=warning msg="cleaning up after shim disconnected" id=c09f627f07f68d859d6bfdefb05a6e13d8c3efbab921fb36772c99b884785536 namespace=moby
	Oct 04 00:12:14 addons-585000 dockerd[1122]: time="2023-10-04T00:12:14.329944687Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1115]: time="2023-10-04T00:12:20.574199852Z" level=info msg="ignoring event" container=ecef984296b3c318eabdfadfcfd794578d7d5b79b06b6d15401d400fa2c41351 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.574490520Z" level=info msg="shim disconnected" id=ecef984296b3c318eabdfadfcfd794578d7d5b79b06b6d15401d400fa2c41351 namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.574591896Z" level=warning msg="cleaning up after shim disconnected" id=ecef984296b3c318eabdfadfcfd794578d7d5b79b06b6d15401d400fa2c41351 namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.574612355Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1115]: time="2023-10-04T00:12:20.629572097Z" level=info msg="ignoring event" container=7c628d57a23adad9e457875314fcdc47575df49afc2fd60f3e2f12f92707c23a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.629930724Z" level=info msg="shim disconnected" id=7c628d57a23adad9e457875314fcdc47575df49afc2fd60f3e2f12f92707c23a namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.629958933Z" level=warning msg="cleaning up after shim disconnected" id=7c628d57a23adad9e457875314fcdc47575df49afc2fd60f3e2f12f92707c23a namespace=moby
	Oct 04 00:12:20 addons-585000 dockerd[1122]: time="2023-10-04T00:12:20.629963350Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7fe1ae4df5fc3       ghcr.io/headlamp-k8s/headlamp@sha256:bb15916c96306cd14f1c9c09c639d01d1d1fb854fd770bf99f3e7a9deb584753          8 minutes ago       Running             headlamp                  0                   ef032c077a9bb       headlamp-58b88cff49-pkdpk
	4999c3b7505c2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf   15 minutes ago      Running             gcp-auth                  0                   0baad7da81cca       gcp-auth-d4c87556c-fbd9n
	59046ced6b465       ba04bb24b9575                                                                                                  15 minutes ago      Running             storage-provisioner       0                   422eeee32f3c7       storage-provisioner
	1cabeffdf46fd       97e04611ad434                                                                                                  15 minutes ago      Running             coredns                   0                   94eeb18b87283       coredns-5dd5756b68-khk2s
	1056d28082563       7da62c127fc0f                                                                                                  15 minutes ago      Running             kube-proxy                0                   af3e361d0b85e       kube-proxy-4m9nm
	92000aefe5383       9cdd6470f48c8                                                                                                  16 minutes ago      Running             etcd                      0                   95e83160dff6c       etcd-addons-585000
	a8e887332d59e       64fc40cee3716                                                                                                  16 minutes ago      Running             kube-scheduler            0                   61eb9114b1a5a       kube-scheduler-addons-585000
	0cc0200950b6e       30bb499447fe1                                                                                                  16 minutes ago      Running             kube-apiserver            0                   d88b38dfea206       kube-apiserver-addons-585000
	c400229c491d5       89d57b83c1786                                                                                                  16 minutes ago      Running             kube-controller-manager   0                   5520678c190ce       kube-controller-manager-addons-585000
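	The container status table above is a CRI-level listing from inside the node. Assuming the profile name from this run, a hypothetical way to reproduce it from the host (the node runs docker://24.0.6 behind cri-dockerd, per the node description below) is:
	
	  minikube -p addons-585000 ssh -- sudo crictl ps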
	
	* 
	* ==> coredns [1cabeffdf46f] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51045 - 1814 "HINFO IN 7601180359592532728.73972322534061862. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.004583475s
	[INFO] 10.244.0.13:48220 - 27508 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000122275s
	[INFO] 10.244.0.13:58104 - 55098 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000039883s
	[INFO] 10.244.0.13:58849 - 26333 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000042675s
	[INFO] 10.244.0.13:53313 - 23208 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000023463s
	[INFO] 10.244.0.13:34872 - 37616 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023379s
	[INFO] 10.244.0.13:36929 - 11673 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023963s
	[INFO] 10.244.0.13:46593 - 5724 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001098011s
	[INFO] 10.244.0.13:48407 - 27925 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001132685s
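	The NXDOMAIN lines above show a single storage.googleapis.com lookup from 10.244.0.13 walking the pod's DNS search path (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local) before the bare name resolves upstream with NOERROR. A hedged way to reproduce the same search-domain walk from inside the cluster (pod name and image are illustrative, not from this run):
	
	  kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup storage.googleapis.com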
	
	* 
	* ==> describe nodes <==
	* Name:               addons-585000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-585000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d9526fa6c1a1bb1b20f95e15606f1308e308d84a
	                    minikube.k8s.io/name=addons-585000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_03T17_04_11_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-585000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 00:04:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-585000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 00:20:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 00:17:27 +0000   Wed, 04 Oct 2023 00:04:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 00:17:27 +0000   Wed, 04 Oct 2023 00:04:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 00:17:27 +0000   Wed, 04 Oct 2023 00:04:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 00:17:27 +0000   Wed, 04 Oct 2023 00:04:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-585000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 0684753a9d5543b6bf7bf60f67ba1317
	  System UUID:                0684753a9d5543b6bf7bf60f67ba1317
	  Boot ID:                    b5c3b3eb-78df-44c0-a5f8-68774932e45d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-d4c87556c-fbd9n                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  headlamp                    headlamp-58b88cff49-pkdpk                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 coredns-5dd5756b68-khk2s                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-585000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-585000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-585000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-4m9nm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-585000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node addons-585000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node addons-585000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node addons-585000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node addons-585000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node addons-585000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node addons-585000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                16m                kubelet          Node addons-585000 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-585000 event: Registered Node addons-585000 in Controller
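	The node description above is the usual `kubectl describe node` view, so the pressure conditions it reports can be re-checked directly while the cluster is still up (commands are illustrative):
	
	  kubectl describe node addons-585000
	  kubectl get node addons-585000 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'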
	
	* 
	* ==> dmesg <==
	* [  +0.495022] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043117] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000788] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +6.182267] systemd-fstab-generator[489]: Ignoring "noauto" for root device
	[  +0.079218] systemd-fstab-generator[500]: Ignoring "noauto" for root device
	[  +0.426961] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.177441] systemd-fstab-generator[742]: Ignoring "noauto" for root device
	[  +0.084046] systemd-fstab-generator[753]: Ignoring "noauto" for root device
	[  +0.086428] systemd-fstab-generator[766]: Ignoring "noauto" for root device
	[  +1.232741] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +0.076723] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[  +0.068940] systemd-fstab-generator[946]: Ignoring "noauto" for root device
	[  +0.067661] systemd-fstab-generator[957]: Ignoring "noauto" for root device
	[  +0.086560] systemd-fstab-generator[999]: Ignoring "noauto" for root device
	[Oct 4 00:04] systemd-fstab-generator[1108]: Ignoring "noauto" for root device
	[  +1.419313] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.528330] systemd-fstab-generator[1477]: Ignoring "noauto" for root device
	[  +4.623780] systemd-fstab-generator[2363]: Ignoring "noauto" for root device
	[ +14.190759] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.496510] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +0.762652] kauditd_printk_skb: 67 callbacks suppressed
	[ +18.036762] kauditd_printk_skb: 10 callbacks suppressed
	[Oct 4 00:10] kauditd_printk_skb: 4 callbacks suppressed
	[Oct 4 00:11] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 4 00:12] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [92000aefe538] <==
	* {"level":"info","ts":"2023-10-04T00:04:07.332412Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2023-10-04T00:04:07.779564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-04T00:04:07.779655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-04T00:04:07.779697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-10-04T00:04:07.779722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-10-04T00:04:07.779739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-10-04T00:04:07.779758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-10-04T00:04:07.779794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-10-04T00:04:07.780671Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-585000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T00:04:07.780756Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T00:04:07.781263Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-10-04T00:04:07.781336Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:04:07.781426Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T00:04:07.781791Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:04:07.781849Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:04:07.781874Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:04:07.782126Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T00:04:07.782156Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-04T00:04:07.782572Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T00:14:07.794073Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1195}
	{"level":"info","ts":"2023-10-04T00:14:07.809078Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1195,"took":"14.718788ms","hash":1561751362}
	{"level":"info","ts":"2023-10-04T00:14:07.809096Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1561751362,"revision":1195,"compact-revision":-1}
	{"level":"info","ts":"2023-10-04T00:19:07.797282Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1852}
	{"level":"info","ts":"2023-10-04T00:19:07.80972Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1852,"took":"12.234463ms","hash":1586820359}
	{"level":"info","ts":"2023-10-04T00:19:07.809734Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1586820359,"revision":1852,"compact-revision":1195}
	
	* 
	* ==> gcp-auth [4999c3b7505c] <==
	* 2023/10/04 00:05:05 GCP Auth Webhook started!
	2023/10/04 00:10:24 Ready to marshal response ...
	2023/10/04 00:10:24 Ready to write response ...
	2023/10/04 00:10:24 Ready to marshal response ...
	2023/10/04 00:10:24 Ready to write response ...
	2023/10/04 00:10:34 Ready to marshal response ...
	2023/10/04 00:10:34 Ready to write response ...
	2023/10/04 00:11:22 Ready to marshal response ...
	2023/10/04 00:11:22 Ready to write response ...
	2023/10/04 00:11:22 Ready to marshal response ...
	2023/10/04 00:11:22 Ready to write response ...
	2023/10/04 00:11:22 Ready to marshal response ...
	2023/10/04 00:11:22 Ready to write response ...
	2023/10/04 00:11:37 Ready to marshal response ...
	2023/10/04 00:11:37 Ready to write response ...
	2023/10/04 00:11:58 Ready to marshal response ...
	2023/10/04 00:11:58 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  00:20:20 up 16 min,  0 users,  load average: 0.16, 0.20, 0.18
	Linux addons-585000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0cc0200950b6] <==
	* E1004 00:10:50.314788       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1004 00:11:08.313792       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1004 00:11:22.215599       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.244.116"}
	I1004 00:11:47.633142       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1004 00:12:08.316300       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.160595       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.160622       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.167272       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.167291       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.174430       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.174447       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.177283       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.177295       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.179264       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.179278       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.184100       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.184112       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.189220       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.189231       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:12:14.192620       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:12:14.192633       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1004 00:12:15.177382       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1004 00:12:15.185019       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1004 00:12:15.201033       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1004 00:12:35.289747       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	* 
	* ==> kube-controller-manager [c400229c491d] <==
	* E1004 00:17:06.657567       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:17:30.734478       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:17:30.734496       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:17:43.686942       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:17:43.686957       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:17:47.974020       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:17:47.974036       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:18:20.227624       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:18:20.227649       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:18:22.080967       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:18:22.080986       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:18:42.876081       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:18:42.876101       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:19:15.038875       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:19:15.038894       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:19:17.102732       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:19:17.102747       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:19:36.954598       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:19:36.954615       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:20:05.388788       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:20:05.388811       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:20:10.923425       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:20:10.923443       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:20:13.934126       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:20:13.934249       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [1056d2808256] <==
	* I1004 00:04:24.994967       1 server_others.go:69] "Using iptables proxy"
	I1004 00:04:25.008728       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I1004 00:04:25.054070       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 00:04:25.054092       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 00:04:25.054953       1 server_others.go:152] "Using iptables Proxier"
	I1004 00:04:25.054989       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 00:04:25.055146       1 server.go:846] "Version info" version="v1.28.2"
	I1004 00:04:25.055152       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 00:04:25.056159       1 config.go:188] "Starting service config controller"
	I1004 00:04:25.056167       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 00:04:25.056185       1 config.go:97] "Starting endpoint slice config controller"
	I1004 00:04:25.056188       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 00:04:25.056455       1 config.go:315] "Starting node config controller"
	I1004 00:04:25.056459       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 00:04:25.157056       1 shared_informer.go:318] Caches are synced for node config
	I1004 00:04:25.157076       1 shared_informer.go:318] Caches are synced for service config
	I1004 00:04:25.157088       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a8e887332d59] <==
	* W1004 00:04:08.386414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 00:04:08.386433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 00:04:08.386500       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 00:04:08.386521       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1004 00:04:08.386548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 00:04:08.386567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1004 00:04:08.386603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 00:04:08.386625       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1004 00:04:08.386669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 00:04:08.386696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1004 00:04:08.386739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 00:04:08.386765       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1004 00:04:08.386786       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 00:04:08.386793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 00:04:08.386810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 00:04:08.386876       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1004 00:04:09.204848       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 00:04:09.204864       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 00:04:09.297780       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 00:04:09.297794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1004 00:04:09.358667       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 00:04:09.358688       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1004 00:04:09.389038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 00:04:09.389085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1004 00:04:09.875156       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 00:03:51 UTC, ends at Wed 2023-10-04 00:20:20 UTC. --
	Oct 04 00:15:10 addons-585000 kubelet[2369]: E1004 00:15:10.892515    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:15:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:15:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:15:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:16:10 addons-585000 kubelet[2369]: E1004 00:16:10.892289    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:16:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:16:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:16:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:17:10 addons-585000 kubelet[2369]: E1004 00:17:10.891861    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:17:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:17:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:17:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:18:10 addons-585000 kubelet[2369]: E1004 00:18:10.892149    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:18:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:18:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:18:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:19:10 addons-585000 kubelet[2369]: E1004 00:19:10.892588    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:19:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:19:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:19:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 00:19:10 addons-585000 kubelet[2369]: W1004 00:19:10.906241    2369 machine.go:65] Cannot read vendor id correctly, set empty.
	Oct 04 00:20:10 addons-585000 kubelet[2369]: E1004 00:20:10.891904    2369 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 00:20:10 addons-585000 kubelet[2369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 00:20:10 addons-585000 kubelet[2369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 00:20:10 addons-585000 kubelet[2369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [59046ced6b46] <==
	* I1004 00:04:26.966816       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 00:04:26.974630       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 00:04:26.974651       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 00:04:26.979059       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 00:04:26.979221       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-585000_78b745e2-b305-4877-9367-ded1aea23542!
	I1004 00:04:26.979729       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7fdf1027-c160-41cc-988a-74718f8f9c77", APIVersion:"v1", ResourceVersion:"562", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-585000_78b745e2-b305-4877-9367-ded1aea23542 became leader
	I1004 00:04:27.081357       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-585000_78b745e2-b305-4877-9367-ded1aea23542!
	

-- /stdout --
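
Note: the recurring kubelet errors in the journal section of the dump above all reduce to one condition: this guest kernel has no ip6tables nat table, so the kubelet's periodic KUBE-KUBELET-CANARY setup fails with exit status 3 every minute. A minimal Go sketch of the same probe (the command line is taken from the error text above; the wrapper itself is illustrative, not part of kubelet, and needs root on a Linux guest):

	// Sketch: reproduce the kubelet canary failure on a kernel without the
	// ip6tables nat modules; expect exit status 3 and the "can't initialize
	// ip6tables table `nat'" message seen in the journal above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}

A clean exit here would mean the canary errors are environmental noise; on this guest they repeat every minute and are likely unrelated to the addon failure itself.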
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-585000 -n addons-585000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-585000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/InspektorGadget FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/InspektorGadget (480.86s)
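
Note: the kube-controller-manager section above logs the same pair of reflector failures for *v1.PartialObjectMetadata repeatedly (roughly every 15-30 seconds), starting after the apiserver terminated the snapshot.storage.k8s.io watchers at 00:12:15. That pattern is consistent with a metadata informer still listing a group/version the apiserver no longer serves. A minimal sketch of that failure mode with client-go's metadata client (the kubeconfig path and the choice of volumesnapshots/v1beta1 are assumptions for illustration, not taken from the harness):

	// Sketch: a metadata LIST against a group/version that is no longer served
	// fails with "the server could not find the requested resource", the same
	// error the reflector at informer.go:106 reports above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/runtime/schema"
		"k8s.io/client-go/metadata"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		mc, err := metadata.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		gvr := schema.GroupVersionResource{
			Group:    "snapshot.storage.k8s.io",
			Version:  "v1beta1", // assumed: the version whose watchers were terminated
			Resource: "volumesnapshots",
		}
		_, err = mc.Resource(gvr).Namespace(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{})
		fmt.Println("list error:", err)
	}

The same LIST against the still-served v1 version would distinguish a removed version from a fully deleted CRD.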

TestCertOptions (10.2s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-032000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
E1003 17:35:05.731222    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-032000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.924108625s)

-- stdout --
	* [cert-options-032000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-032000 in cluster cert-options-032000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-032000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-032000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-032000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-032000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-032000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (75.184958ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-032000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-032000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-032000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-032000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-032000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (38.719209ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-032000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-032000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port.
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-032000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-10-03 17:35:12.994004 -0700 PDT m=+1912.217314209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-032000 -n cert-options-032000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-032000 -n cert-options-032000: exit status 7 (28.6965ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-032000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-032000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-032000
--- FAIL: TestCertOptions (10.20s)
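
Note: every qemu2 start in this block (and in TestCertExpiration and TestDockerFlags below) dies on the same condition: Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. the socket_vmnet daemon is not listening on the socket minikube's qemu2 driver hands to qemu. A minimal pre-flight probe, sketched in Go (the socket path is taken from the logs; the probe itself is an illustration, not part of the harness):

	// Sketch: dial the unix socket the qemu2 driver depends on. A "connection
	// refused" here is the same condition that aborts every VM creation above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Running such a probe before the qemu2 test group would let the harness skip these tests with one clear message instead of failing each of them after a ~10s create/retry cycle.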

TestCertExpiration (195.33s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.842070542s)

-- stdout --
	* [cert-expiration-876000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-876000 in cluster cert-expiration-876000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-876000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-876000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.286385834s)

-- stdout --
	* [cert-expiration-876000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-876000 in cluster cert-expiration-876000
	* Restarting existing qemu2 VM for "cert-expiration-876000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-876000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-876000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-876000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-876000 in cluster cert-expiration-876000
	* Restarting existing qemu2 VM for "cert-expiration-876000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-876000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-876000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-10-03 17:38:13.057139 -0700 PDT m=+2092.283968168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-876000 -n cert-expiration-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-876000 -n cert-expiration-876000: exit status 7 (50.859542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-876000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-876000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-876000
--- FAIL: TestCertExpiration (195.33s)
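
Note: of the 195.33s, roughly 180s is spent between the two starts, which is consistent with the test waiting out the 3m certificate lifetime (--cert-expiration=3m) before restarting with 8760h; both starts themselves fail within ~10s and ~5s. The harness classifies the failures purely by child-process exit status (80 for the GUEST_PROVISION exits above, 89 when the control plane is not running, 7 from "minikube status" on a stopped profile). A sketch of recovering that status, as the "(dbg) Non-zero exit" lines do (binary and flags are copied from the commands in this report):

	// Sketch: recover the numeric exit status that the "(dbg) Non-zero exit"
	// lines report. For the stopped profile above, the printed code would be 7.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "cert-expiration-876000")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		}
	}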

TestDockerFlags (9.92s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-105000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-105000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.676390208s)

-- stdout --
	* [docker-flags-105000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-105000 in cluster docker-flags-105000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-105000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:34:53.028951    3934 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:34:53.029100    3934 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:34:53.029103    3934 out.go:309] Setting ErrFile to fd 2...
	I1003 17:34:53.029106    3934 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:34:53.029247    3934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:34:53.030305    3934 out.go:303] Setting JSON to false
	I1003 17:34:53.046355    3934 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2067,"bootTime":1696377626,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:34:53.046450    3934 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:34:53.052420    3934 out.go:177] * [docker-flags-105000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:34:53.060346    3934 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:34:53.060414    3934 notify.go:220] Checking for updates...
	I1003 17:34:53.065322    3934 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:34:53.068306    3934 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:34:53.071397    3934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:34:53.074315    3934 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:34:53.077232    3934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:34:53.080683    3934 config.go:182] Loaded profile config "force-systemd-flag-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:34:53.080746    3934 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:34:53.080799    3934 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:34:53.085363    3934 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:34:53.092289    3934 start.go:298] selected driver: qemu2
	I1003 17:34:53.092297    3934 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:34:53.092303    3934 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:34:53.094601    3934 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:34:53.097327    3934 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:34:53.100299    3934 start_flags.go:918] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1003 17:34:53.100324    3934 cni.go:84] Creating CNI manager for ""
	I1003 17:34:53.100333    3934 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:34:53.100338    3934 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:34:53.100344    3934 start_flags.go:321] config:
	{Name:docker-flags-105000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-105000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:34:53.104823    3934 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:34:53.112328    3934 out.go:177] * Starting control plane node docker-flags-105000 in cluster docker-flags-105000
	I1003 17:34:53.116302    3934 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:34:53.116316    3934 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:34:53.116328    3934 cache.go:57] Caching tarball of preloaded images
	I1003 17:34:53.116393    3934 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:34:53.116399    3934 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:34:53.116473    3934 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/docker-flags-105000/config.json ...
	I1003 17:34:53.116486    3934 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/docker-flags-105000/config.json: {Name:mk0f7c969eae1e8e42d18537b63610415b7e137c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:34:53.116714    3934 start.go:365] acquiring machines lock for docker-flags-105000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:34:53.116745    3934 start.go:369] acquired machines lock for "docker-flags-105000" in 25.166µs
	I1003 17:34:53.116756    3934 start.go:93] Provisioning new machine with config: &{Name:docker-flags-105000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-105000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:34:53.116792    3934 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:34:53.125261    3934 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 17:34:53.142256    3934 start.go:159] libmachine.API.Create for "docker-flags-105000" (driver="qemu2")
	I1003 17:34:53.142282    3934 client.go:168] LocalClient.Create starting
	I1003 17:34:53.142341    3934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:34:53.142372    3934 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:53.142385    3934 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:53.142425    3934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:34:53.142446    3934 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:53.142454    3934 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:53.142803    3934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:34:53.253802    3934 main.go:141] libmachine: Creating SSH key...
	I1003 17:34:53.309126    3934 main.go:141] libmachine: Creating Disk image...
	I1003 17:34:53.309132    3934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:34:53.309276    3934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/disk.qcow2
	I1003 17:34:53.317959    3934 main.go:141] libmachine: STDOUT: 
	I1003 17:34:53.317978    3934 main.go:141] libmachine: STDERR: 
	I1003 17:34:53.318029    3934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/disk.qcow2 +20000M
	I1003 17:34:53.325427    3934 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:34:53.325445    3934 main.go:141] libmachine: STDERR: 
	I1003 17:34:53.325464    3934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/disk.qcow2
	I1003 17:34:53.325470    3934 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:34:53.325507    3934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:30:c8:2e:76:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/disk.qcow2
	I1003 17:34:53.327131    3934 main.go:141] libmachine: STDOUT: 
	I1003 17:34:53.327145    3934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:34:53.327170    3934 client.go:171] LocalClient.Create took 184.886042ms
	I1003 17:34:55.329348    3934 start.go:128] duration metric: createHost completed in 2.212575208s
	I1003 17:34:55.329440    3934 start.go:83] releasing machines lock for "docker-flags-105000", held for 2.212698333s
	W1003 17:34:55.329507    3934 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:55.342655    3934 out.go:177] * Deleting "docker-flags-105000" in qemu2 ...
	W1003 17:34:55.362851    3934 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:55.362880    3934 start.go:703] Will try again in 5 seconds ...
	I1003 17:35:00.364966    3934 start.go:365] acquiring machines lock for docker-flags-105000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:35:00.365383    3934 start.go:369] acquired machines lock for "docker-flags-105000" in 311.5µs
	I1003 17:35:00.365494    3934 start.go:93] Provisioning new machine with config: &{Name:docker-flags-105000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-105000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:35:00.365778    3934 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:35:00.375050    3934 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 17:35:00.424565    3934 start.go:159] libmachine.API.Create for "docker-flags-105000" (driver="qemu2")
	I1003 17:35:00.424661    3934 client.go:168] LocalClient.Create starting
	I1003 17:35:00.424939    3934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:35:00.425029    3934 main.go:141] libmachine: Decoding PEM data...
	I1003 17:35:00.425060    3934 main.go:141] libmachine: Parsing certificate...
	I1003 17:35:00.425177    3934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:35:00.425227    3934 main.go:141] libmachine: Decoding PEM data...
	I1003 17:35:00.425255    3934 main.go:141] libmachine: Parsing certificate...
	I1003 17:35:00.426214    3934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:35:00.549073    3934 main.go:141] libmachine: Creating SSH key...
	I1003 17:35:00.617420    3934 main.go:141] libmachine: Creating Disk image...
	I1003 17:35:00.617427    3934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:35:00.617576    3934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/disk.qcow2
	I1003 17:35:00.626552    3934 main.go:141] libmachine: STDOUT: 
	I1003 17:35:00.626576    3934 main.go:141] libmachine: STDERR: 
	I1003 17:35:00.626646    3934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/disk.qcow2 +20000M
	I1003 17:35:00.634390    3934 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:35:00.634403    3934 main.go:141] libmachine: STDERR: 
	I1003 17:35:00.634417    3934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/disk.qcow2
	I1003 17:35:00.634425    3934 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:35:00.634468    3934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:7a:f7:7f:36:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/docker-flags-105000/disk.qcow2
	I1003 17:35:00.636124    3934 main.go:141] libmachine: STDOUT: 
	I1003 17:35:00.636140    3934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:35:00.636156    3934 client.go:171] LocalClient.Create took 211.49225ms
	I1003 17:35:02.638336    3934 start.go:128] duration metric: createHost completed in 2.272562042s
	I1003 17:35:02.638430    3934 start.go:83] releasing machines lock for "docker-flags-105000", held for 2.273065583s
	W1003 17:35:02.638977    3934 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-105000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:35:02.648666    3934 out.go:177] 
	W1003 17:35:02.652767    3934 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:35:02.652809    3934 out.go:239] * 
	W1003 17:35:02.655515    3934 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:35:02.665628    3934 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-105000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-105000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-105000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (75.102375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-105000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-105000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-105000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-105000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-105000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-105000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (43.723792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-105000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-105000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-105000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-105000\"\n"
panic.go:523: *** TestDockerFlags FAILED at 2023-10-03 17:35:02.799997 -0700 PDT m=+1902.023108251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-105000 -n docker-flags-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-105000 -n docker-flags-105000: exit status 7 (27.638792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-105000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-105000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-105000
--- FAIL: TestDockerFlags (9.92s)
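
Every qemu2 start in this failure, and in the ones that follow, dies at the same host-side step: socket_vmnet_client reports `Failed to connect to "/var/run/socket_vmnet": Connection refused`, so no VM ever boots and the Docker flags are never delivered. Below is a minimal pre-flight sketch, not part of the minikube test suite, that would surface this condition once, up front; only the socket path is taken from the logs above, the file name and messages are illustrative.

// preflight.go — hypothetical check that the socket_vmnet daemon is
// accepting connections before any qemu2 suite runs.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the machine config above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the condition that surfaced as "Connection refused"
		// in every VM start in this report.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is up; qemu2 tests can proceed")
}

Run on this CI host, such a check would have failed immediately instead of letting each qemu2 suite retry, time out, and fail one by one.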

TestForceSystemdFlag (10.01s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-616000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-616000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.798370875s)

-- stdout --
	* [force-systemd-flag-616000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-616000 in cluster force-systemd-flag-616000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-616000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:34:47.948168    3911 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:34:47.948330    3911 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:34:47.948332    3911 out.go:309] Setting ErrFile to fd 2...
	I1003 17:34:47.948335    3911 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:34:47.948475    3911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:34:47.949505    3911 out.go:303] Setting JSON to false
	I1003 17:34:47.965413    3911 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2061,"bootTime":1696377626,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:34:47.965508    3911 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:34:47.971463    3911 out.go:177] * [force-systemd-flag-616000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:34:47.978515    3911 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:34:47.981424    3911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:34:47.978571    3911 notify.go:220] Checking for updates...
	I1003 17:34:47.987352    3911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:34:47.990482    3911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:34:47.993418    3911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:34:47.996378    3911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:34:47.999775    3911 config.go:182] Loaded profile config "force-systemd-env-769000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:34:47.999841    3911 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:34:47.999886    3911 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:34:48.004473    3911 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:34:48.011446    3911 start.go:298] selected driver: qemu2
	I1003 17:34:48.011452    3911 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:34:48.011458    3911 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:34:48.013691    3911 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:34:48.017453    3911 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:34:48.021441    3911 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 17:34:48.021469    3911 cni.go:84] Creating CNI manager for ""
	I1003 17:34:48.021477    3911 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:34:48.021482    3911 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:34:48.021491    3911 start_flags.go:321] config:
	{Name:force-systemd-flag-616000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-616000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:34:48.025971    3911 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:34:48.033364    3911 out.go:177] * Starting control plane node force-systemd-flag-616000 in cluster force-systemd-flag-616000
	I1003 17:34:48.037383    3911 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:34:48.037398    3911 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:34:48.037404    3911 cache.go:57] Caching tarball of preloaded images
	I1003 17:34:48.037453    3911 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:34:48.037459    3911 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:34:48.037511    3911 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/force-systemd-flag-616000/config.json ...
	I1003 17:34:48.037522    3911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/force-systemd-flag-616000/config.json: {Name:mkcb9dac14ce2ed453648457a4f7cf63bf377d90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:34:48.037805    3911 start.go:365] acquiring machines lock for force-systemd-flag-616000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:34:48.037837    3911 start.go:369] acquired machines lock for "force-systemd-flag-616000" in 22.791µs
	I1003 17:34:48.037848    3911 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-616000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:34:48.037879    3911 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:34:48.046439    3911 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 17:34:48.062785    3911 start.go:159] libmachine.API.Create for "force-systemd-flag-616000" (driver="qemu2")
	I1003 17:34:48.062810    3911 client.go:168] LocalClient.Create starting
	I1003 17:34:48.062870    3911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:34:48.062893    3911 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:48.062903    3911 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:48.062934    3911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:34:48.062951    3911 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:48.062959    3911 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:48.063257    3911 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:34:48.174411    3911 main.go:141] libmachine: Creating SSH key...
	I1003 17:34:48.335617    3911 main.go:141] libmachine: Creating Disk image...
	I1003 17:34:48.335627    3911 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:34:48.335804    3911 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/disk.qcow2
	I1003 17:34:48.344865    3911 main.go:141] libmachine: STDOUT: 
	I1003 17:34:48.344889    3911 main.go:141] libmachine: STDERR: 
	I1003 17:34:48.344940    3911 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/disk.qcow2 +20000M
	I1003 17:34:48.352511    3911 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:34:48.352525    3911 main.go:141] libmachine: STDERR: 
	I1003 17:34:48.352542    3911 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/disk.qcow2
	I1003 17:34:48.352548    3911 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:34:48.352589    3911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:e9:0c:19:0e:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/disk.qcow2
	I1003 17:34:48.354196    3911 main.go:141] libmachine: STDOUT: 
	I1003 17:34:48.354211    3911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:34:48.354231    3911 client.go:171] LocalClient.Create took 291.421333ms
	I1003 17:34:50.356432    3911 start.go:128] duration metric: createHost completed in 2.31858125s
	I1003 17:34:50.356491    3911 start.go:83] releasing machines lock for "force-systemd-flag-616000", held for 2.318689042s
	W1003 17:34:50.356542    3911 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:50.376570    3911 out.go:177] * Deleting "force-systemd-flag-616000" in qemu2 ...
	W1003 17:34:50.391686    3911 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:50.391745    3911 start.go:703] Will try again in 5 seconds ...
	I1003 17:34:55.393874    3911 start.go:365] acquiring machines lock for force-systemd-flag-616000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:34:55.394240    3911 start.go:369] acquired machines lock for "force-systemd-flag-616000" in 247.917µs
	I1003 17:34:55.394354    3911 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-616000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:34:55.394613    3911 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:34:55.403566    3911 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 17:34:55.451562    3911 start.go:159] libmachine.API.Create for "force-systemd-flag-616000" (driver="qemu2")
	I1003 17:34:55.451596    3911 client.go:168] LocalClient.Create starting
	I1003 17:34:55.451714    3911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:34:55.451764    3911 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:55.451784    3911 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:55.451851    3911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:34:55.451885    3911 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:55.451901    3911 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:55.452519    3911 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:34:55.575860    3911 main.go:141] libmachine: Creating SSH key...
	I1003 17:34:55.658361    3911 main.go:141] libmachine: Creating Disk image...
	I1003 17:34:55.658367    3911 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:34:55.658527    3911 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/disk.qcow2
	I1003 17:34:55.667293    3911 main.go:141] libmachine: STDOUT: 
	I1003 17:34:55.667310    3911 main.go:141] libmachine: STDERR: 
	I1003 17:34:55.667366    3911 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/disk.qcow2 +20000M
	I1003 17:34:55.674877    3911 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:34:55.674897    3911 main.go:141] libmachine: STDERR: 
	I1003 17:34:55.674911    3911 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/disk.qcow2
	I1003 17:34:55.674917    3911 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:34:55.674961    3911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:48:bb:ea:98:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-flag-616000/disk.qcow2
	I1003 17:34:55.676619    3911 main.go:141] libmachine: STDOUT: 
	I1003 17:34:55.676631    3911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:34:55.676643    3911 client.go:171] LocalClient.Create took 225.046583ms
	I1003 17:34:57.678785    3911 start.go:128] duration metric: createHost completed in 2.284163208s
	I1003 17:34:57.678855    3911 start.go:83] releasing machines lock for "force-systemd-flag-616000", held for 2.284634542s
	W1003 17:34:57.679345    3911 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-616000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:57.687956    3911 out.go:177] 
	W1003 17:34:57.693240    3911 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:34:57.693281    3911 out.go:239] * 
	W1003 17:34:57.695918    3911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:34:57.706932    3911 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-616000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-616000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-616000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (74.861125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-616000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-616000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-10-03 17:34:57.798046 -0700 PDT m=+1897.021060334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-616000 -n force-systemd-flag-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-616000 -n force-systemd-flag-616000: exit status 7 (33.506792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-616000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-616000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-616000
--- FAIL: TestForceSystemdFlag (10.01s)
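
For context on the assertion that never got to run: with --force-systemd, docker_test.go:110 asks the Docker daemon inside the node which cgroup driver it uses and expects "systemd". A standalone sketch of that check follows, under the assumption that the test simply trims the ssh output and compares it; the error messages are illustrative, the command line is the one captured above.

// cgroupcheck.go — hypothetical standalone version of the cgroup-driver
// assertion; it issues the same ssh command the test ran above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// This command returned exit status 89 above because the
	// control-plane node was never running.
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-616000",
		"ssh", "docker info --format {{.CgroupDriver}}")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "ssh failed (is the cluster running?): %v\n%s", err, out)
		os.Exit(1)
	}
	if got := strings.TrimSpace(string(out)); got != "systemd" {
		fmt.Fprintf(os.Stderr, "cgroup driver = %q, want \"systemd\"\n", got)
		os.Exit(1)
	}
	fmt.Println("cgroup driver is systemd, as --force-systemd requires")
}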

TestForceSystemdEnv (10.62s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-769000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-769000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.41308775s)

-- stdout --
	* [force-systemd-env-769000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-769000 in cluster force-systemd-env-769000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:34:42.410185    3876 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:34:42.410404    3876 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:34:42.410407    3876 out.go:309] Setting ErrFile to fd 2...
	I1003 17:34:42.410410    3876 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:34:42.410539    3876 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:34:42.411532    3876 out.go:303] Setting JSON to false
	I1003 17:34:42.427586    3876 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2056,"bootTime":1696377626,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:34:42.427674    3876 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:34:42.432132    3876 out.go:177] * [force-systemd-env-769000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:34:42.440116    3876 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:34:42.445043    3876 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:34:42.440209    3876 notify.go:220] Checking for updates...
	I1003 17:34:42.450062    3876 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:34:42.453156    3876 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:34:42.456097    3876 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:34:42.459032    3876 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1003 17:34:42.462485    3876 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:34:42.462528    3876 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:34:42.467054    3876 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:34:42.474114    3876 start.go:298] selected driver: qemu2
	I1003 17:34:42.474123    3876 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:34:42.474129    3876 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:34:42.476519    3876 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:34:42.479097    3876 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:34:42.482126    3876 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 17:34:42.482148    3876 cni.go:84] Creating CNI manager for ""
	I1003 17:34:42.482159    3876 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:34:42.482163    3876 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:34:42.482179    3876 start_flags.go:321] config:
	{Name:force-systemd-env-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:34:42.486771    3876 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:34:42.493938    3876 out.go:177] * Starting control plane node force-systemd-env-769000 in cluster force-systemd-env-769000
	I1003 17:34:42.498110    3876 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:34:42.498126    3876 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:34:42.498143    3876 cache.go:57] Caching tarball of preloaded images
	I1003 17:34:42.498197    3876 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:34:42.498202    3876 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:34:42.498274    3876 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/force-systemd-env-769000/config.json ...
	I1003 17:34:42.498285    3876 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/force-systemd-env-769000/config.json: {Name:mk7a2b2b7922d05a5478f209501d88a691907f32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:34:42.498493    3876 start.go:365] acquiring machines lock for force-systemd-env-769000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:34:42.498527    3876 start.go:369] acquired machines lock for "force-systemd-env-769000" in 26.917µs
	I1003 17:34:42.498538    3876 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:34:42.498563    3876 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:34:42.500603    3876 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 17:34:42.515958    3876 start.go:159] libmachine.API.Create for "force-systemd-env-769000" (driver="qemu2")
	I1003 17:34:42.515983    3876 client.go:168] LocalClient.Create starting
	I1003 17:34:42.516038    3876 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:34:42.516064    3876 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:42.516075    3876 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:42.516110    3876 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:34:42.516127    3876 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:42.516135    3876 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:42.516445    3876 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:34:42.628317    3876 main.go:141] libmachine: Creating SSH key...
	I1003 17:34:42.697819    3876 main.go:141] libmachine: Creating Disk image...
	I1003 17:34:42.697825    3876 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:34:42.697990    3876 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/disk.qcow2
	I1003 17:34:42.706632    3876 main.go:141] libmachine: STDOUT: 
	I1003 17:34:42.706647    3876 main.go:141] libmachine: STDERR: 
	I1003 17:34:42.706709    3876 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/disk.qcow2 +20000M
	I1003 17:34:42.714103    3876 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:34:42.714119    3876 main.go:141] libmachine: STDERR: 
	I1003 17:34:42.714318    3876 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/disk.qcow2
	I1003 17:34:42.714328    3876 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:34:42.715014    3876 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:e5:62:4c:92:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/disk.qcow2
	I1003 17:34:42.716992    3876 main.go:141] libmachine: STDOUT: 
	I1003 17:34:42.717013    3876 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:34:42.717033    3876 client.go:171] LocalClient.Create took 201.049333ms
	I1003 17:34:44.719077    3876 start.go:128] duration metric: createHost completed in 2.220548s
	I1003 17:34:44.719096    3876 start.go:83] releasing machines lock for "force-systemd-env-769000", held for 2.220607917s
	W1003 17:34:44.719111    3876 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:44.727677    3876 out.go:177] * Deleting "force-systemd-env-769000" in qemu2 ...
	W1003 17:34:44.735784    3876 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:44.735797    3876 start.go:703] Will try again in 5 seconds ...
	I1003 17:34:49.738000    3876 start.go:365] acquiring machines lock for force-systemd-env-769000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:34:50.356625    3876 start.go:369] acquired machines lock for "force-systemd-env-769000" in 618.532584ms
	I1003 17:34:50.356802    3876 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-769000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-769000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:34:50.357031    3876 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:34:50.368509    3876 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 17:34:50.414674    3876 start.go:159] libmachine.API.Create for "force-systemd-env-769000" (driver="qemu2")
	I1003 17:34:50.414717    3876 client.go:168] LocalClient.Create starting
	I1003 17:34:50.414849    3876 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:34:50.414894    3876 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:50.414918    3876 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:50.414978    3876 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:34:50.415012    3876 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:50.415030    3876 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:50.415541    3876 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:34:50.537779    3876 main.go:141] libmachine: Creating SSH key...
	I1003 17:34:50.730728    3876 main.go:141] libmachine: Creating Disk image...
	I1003 17:34:50.730735    3876 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:34:50.730925    3876 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/disk.qcow2
	I1003 17:34:50.740053    3876 main.go:141] libmachine: STDOUT: 
	I1003 17:34:50.740067    3876 main.go:141] libmachine: STDERR: 
	I1003 17:34:50.740141    3876 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/disk.qcow2 +20000M
	I1003 17:34:50.747603    3876 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:34:50.747615    3876 main.go:141] libmachine: STDERR: 
	I1003 17:34:50.747630    3876 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/disk.qcow2
	I1003 17:34:50.747636    3876 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:34:50.747677    3876 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:15:07:7d:b9:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/force-systemd-env-769000/disk.qcow2
	I1003 17:34:50.749253    3876 main.go:141] libmachine: STDOUT: 
	I1003 17:34:50.749267    3876 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:34:50.749278    3876 client.go:171] LocalClient.Create took 334.561459ms
	I1003 17:34:52.751447    3876 start.go:128] duration metric: createHost completed in 2.394422292s
	I1003 17:34:52.751533    3876 start.go:83] releasing machines lock for "force-systemd-env-769000", held for 2.394912208s
	W1003 17:34:52.751919    3876 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:52.760551    3876 out.go:177] 
	W1003 17:34:52.764635    3876 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:34:52.764693    3876 out.go:239] * 
	W1003 17:34:52.767494    3876 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:34:52.780491    3876 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-769000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-769000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-769000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (79.913583ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-769000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-769000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-10-03 17:34:52.877289 -0700 PDT m=+1892.100208084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-769000 -n force-systemd-env-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-769000 -n force-systemd-env-769000: exit status 7 (33.781167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-769000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-769000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-769000
--- FAIL: TestForceSystemdEnv (10.62s)
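
Every retry above dies at the same point, connecting to /var/run/socket_vmnet, so the failure sits in the host's socket_vmnet daemon rather than in minikube itself. A minimal check sequence on the build host, as a sketch (the socket and client paths are the ones shown in the log; the launchd label below is an assumption and may differ per install):

	# Is the daemon alive and serving its socket?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If socket_vmnet was installed with its bundled launchd plist,
	# restarting it might look like this (label assumed):
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

If the daemon comes back, re-running the start command from the log above should get past host creation.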

TestFunctional/parallel/ServiceCmdConnect (41.27s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-488000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-488000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-nzszr" [edc365f6-d551-4665-af2f-225bc6837e9e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-nzszr" [edc365f6-d551-4665-af2f-225bc6837e9e] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.008402875s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:30271
functional_test.go:1660: error fetching http://192.168.105.4:30271: Get "http://192.168.105.4:30271": dial tcp 192.168.105.4:30271: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30271: Get "http://192.168.105.4:30271": dial tcp 192.168.105.4:30271: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30271: Get "http://192.168.105.4:30271": dial tcp 192.168.105.4:30271: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30271: Get "http://192.168.105.4:30271": dial tcp 192.168.105.4:30271: connect: connection refused
E1003 17:26:05.563399    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
functional_test.go:1660: error fetching http://192.168.105.4:30271: Get "http://192.168.105.4:30271": dial tcp 192.168.105.4:30271: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30271: Get "http://192.168.105.4:30271": dial tcp 192.168.105.4:30271: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30271: Get "http://192.168.105.4:30271": dial tcp 192.168.105.4:30271: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:30271: Get "http://192.168.105.4:30271": dial tcp 192.168.105.4:30271: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:30271: Get "http://192.168.105.4:30271": dial tcp 192.168.105.4:30271: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-488000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-nzszr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-488000/192.168.105.4
Start Time:       Tue, 03 Oct 2023 17:25:48 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
  echoserver-arm:
    Container ID:   docker://eee59ec12de82005d72b576162cfd1555f6e7abd6052bbe8055ade7b753afe53
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 03 Oct 2023 17:26:10 -0700
      Finished:     Tue, 03 Oct 2023 17:26:10 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5xbnd (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
  kube-api-access-5xbnd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  40s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-nzszr to functional-488000
Normal   Pulling    40s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     36s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 4.289s (4.289s including waiting)
Normal   Created    18s (x3 over 35s)  kubelet            Created container echoserver-arm
Normal   Started    18s (x3 over 35s)  kubelet            Started container echoserver-arm
Normal   Pulled     18s (x2 over 35s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    7s (x4 over 34s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-nzszr_default(edc365f6-d551-4665-af2f-225bc6837e9e)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-488000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1613: (dbg) Run:  kubectl --context functional-488000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.206.87
IPs:                      10.103.206.87
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30271/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
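
Two details above explain the connection-refused loop: the container log shows "exec /usr/sbin/nginx: exec format error" (the binary inside the image does not match the node's architecture), and the service's Endpoints: field is empty, because a crash-looping pod never turns Ready and so is never added to the endpoint list, leaving the NodePort with nothing to forward to. A quick way to confirm the mismatch, as a sketch (standard docker/kubectl commands; profile and image names are taken from the log above):

	# What platform does the published image actually carry?
	docker manifest inspect registry.k8s.io/echoserver-arm:1.8 | grep -i architecture
	# What did the node actually pull?
	out/minikube-darwin-arm64 -p functional-488000 ssh -- \
	  docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8
	# The empty endpoint list is visible directly:
	kubectl --context functional-488000 get endpoints hello-node-connect
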
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-488000 -n functional-488000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-488000                                                                                                 | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2794950848/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh -- ls                                                                                          | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh sudo                                                                                           | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-488000                                                                                                 | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2409114035/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-488000                                                                                                 | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2409114035/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-488000                                                                                                 | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2409114035/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-488000 ssh findmnt                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| start     | -p functional-488000                                                                                                 | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-488000                                                                                                 | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-488000 --dry-run                                                                                       | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|           | -p functional-488000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/03 17:26:25
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:26:25.616653    2628 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:26:25.616783    2628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:26:25.616786    2628 out.go:309] Setting ErrFile to fd 2...
	I1003 17:26:25.616789    2628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:26:25.616921    2628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:26:25.617991    2628 out.go:303] Setting JSON to false
	I1003 17:26:25.634203    2628 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1559,"bootTime":1696377626,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:26:25.634275    2628 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:26:25.637920    2628 out.go:177] * [functional-488000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:26:25.644986    2628 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:26:25.647888    2628 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:26:25.645096    2628 notify.go:220] Checking for updates...
	I1003 17:26:25.653981    2628 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:26:25.655323    2628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:26:25.657998    2628 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:26:25.660979    2628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:26:25.664319    2628 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:26:25.664559    2628 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:26:25.668949    2628 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 17:26:25.675767    2628 start.go:298] selected driver: qemu2
	I1003 17:26:25.675773    2628 start.go:902] validating driver "qemu2" against &{Name:functional-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-488000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:26:25.675821    2628 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:26:25.677953    2628 cni.go:84] Creating CNI manager for ""
	I1003 17:26:25.677966    2628 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:26:25.677970    2628 start_flags.go:321] config:
	{Name:functional-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-488000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:26:25.688985    2628 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-10-04 00:23:39 UTC, ends at Wed 2023-10-04 00:26:28 UTC. --
	Oct 04 00:26:12 functional-488000 dockerd[6665]: time="2023-10-04T00:26:12.377169528Z" level=warning msg="cleaning up after shim disconnected" id=ac4df190eca9d6a9e5c4eb9d887697137f8cb5f92f96435de54a3a48b02cf519 namespace=moby
	Oct 04 00:26:12 functional-488000 dockerd[6665]: time="2023-10-04T00:26:12.377173861Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:26:13 functional-488000 dockerd[6659]: time="2023-10-04T00:26:13.542103498Z" level=info msg="ignoring event" container=085501f08b95a945940acc7351b2bbd367d5741bb8d69ac0810163844a8ee2cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:26:13 functional-488000 dockerd[6665]: time="2023-10-04T00:26:13.542570073Z" level=info msg="shim disconnected" id=085501f08b95a945940acc7351b2bbd367d5741bb8d69ac0810163844a8ee2cb namespace=moby
	Oct 04 00:26:13 functional-488000 dockerd[6665]: time="2023-10-04T00:26:13.542596364Z" level=warning msg="cleaning up after shim disconnected" id=085501f08b95a945940acc7351b2bbd367d5741bb8d69ac0810163844a8ee2cb namespace=moby
	Oct 04 00:26:13 functional-488000 dockerd[6665]: time="2023-10-04T00:26:13.542600364Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:26:18 functional-488000 dockerd[6665]: time="2023-10-04T00:26:18.067437770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 00:26:18 functional-488000 dockerd[6665]: time="2023-10-04T00:26:18.067470853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:26:18 functional-488000 dockerd[6665]: time="2023-10-04T00:26:18.067482645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 00:26:18 functional-488000 dockerd[6665]: time="2023-10-04T00:26:18.067488936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:26:18 functional-488000 dockerd[6665]: time="2023-10-04T00:26:18.093080806Z" level=info msg="shim disconnected" id=6d29eb0846b0dc90b4fb3db1528deabd56c6f69e8d7e8f0afc20536b92d974c1 namespace=moby
	Oct 04 00:26:18 functional-488000 dockerd[6665]: time="2023-10-04T00:26:18.093108681Z" level=warning msg="cleaning up after shim disconnected" id=6d29eb0846b0dc90b4fb3db1528deabd56c6f69e8d7e8f0afc20536b92d974c1 namespace=moby
	Oct 04 00:26:18 functional-488000 dockerd[6665]: time="2023-10-04T00:26:18.093113514Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:26:18 functional-488000 dockerd[6659]: time="2023-10-04T00:26:18.093188263Z" level=info msg="ignoring event" container=6d29eb0846b0dc90b4fb3db1528deabd56c6f69e8d7e8f0afc20536b92d974c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:26:26 functional-488000 dockerd[6665]: time="2023-10-04T00:26:26.586220880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 00:26:26 functional-488000 dockerd[6665]: time="2023-10-04T00:26:26.586470877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:26:26 functional-488000 dockerd[6665]: time="2023-10-04T00:26:26.586522001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 00:26:26 functional-488000 dockerd[6665]: time="2023-10-04T00:26:26.586547626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:26:26 functional-488000 dockerd[6665]: time="2023-10-04T00:26:26.602215879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 00:26:26 functional-488000 dockerd[6665]: time="2023-10-04T00:26:26.602297420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:26:26 functional-488000 dockerd[6665]: time="2023-10-04T00:26:26.602326211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 00:26:26 functional-488000 dockerd[6665]: time="2023-10-04T00:26:26.602349002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:26:26 functional-488000 cri-dockerd[6925]: time="2023-10-04T00:26:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/28a8488d7901b995dec27a37e4cdc0a23484ab2753a96d08821e0d47bb26ffad/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 04 00:26:26 functional-488000 cri-dockerd[6925]: time="2023-10-04T00:26:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/37609cf9038b9952b46acd601efc071dff86b5d871c936d5a27b0adf6d0c3895/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 04 00:26:26 functional-488000 dockerd[6659]: time="2023-10-04T00:26:26.957223514Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6d29eb0846b0d       72565bf5bbedf                                                                                         10 seconds ago       Exited              echoserver-arm            2                   54ebc8bc55b9c       hello-node-759d89bdcc-w8tqc
	ac4df190eca9d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   16 seconds ago       Exited              mount-munger              0                   085501f08b95a       busybox-mount
	eee59ec12de82       72565bf5bbedf                                                                                         18 seconds ago       Exited              echoserver-arm            2                   84b91255fa197       hello-node-connect-7799dfb7c6-nzszr
	9c731dc7edee9       nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755                         33 seconds ago       Running             myfrontend                0                   030555e4c8df3       sp-pod
	49edc58270796       nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                         47 seconds ago       Running             nginx                     0                   30740eb316856       nginx-svc
	330498b12539f       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   9d03114393694       coredns-5dd5756b68-hb78x
	2893cf6a6b373       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       1                   8c4e9408f4961       storage-provisioner
	9f05e1e6df36d       7da62c127fc0f                                                                                         About a minute ago   Running             kube-proxy                2                   427bc8904deba       kube-proxy-9zqjs
	6f0962fc282a5       64fc40cee3716                                                                                         About a minute ago   Running             kube-scheduler            2                   de7a9a2c639d8       kube-scheduler-functional-488000
	0a51a1e9907b9       9cdd6470f48c8                                                                                         About a minute ago   Running             etcd                      2                   8d379c382e38f       etcd-functional-488000
	684a676839726       89d57b83c1786                                                                                         About a minute ago   Running             kube-controller-manager   2                   48988aff410d5       kube-controller-manager-functional-488000
	46a3c60ab8b36       30bb499447fe1                                                                                         About a minute ago   Running             kube-apiserver            0                   7e52353c3129c       kube-apiserver-functional-488000
	15cf0e45db248       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       0                   a9700eae13bff       storage-provisioner
	6821a487dcbfb       97e04611ad434                                                                                         About a minute ago   Exited              coredns                   1                   1bfdb3ce9ffb4       coredns-5dd5756b68-hb78x
	f552969b825a2       64fc40cee3716                                                                                         2 minutes ago        Exited              kube-scheduler            1                   825f8d926a22d       kube-scheduler-functional-488000
	b15e1c0913c9b       7da62c127fc0f                                                                                         2 minutes ago        Exited              kube-proxy                1                   aef5f912d5476       kube-proxy-9zqjs
	6e23bee0f3942       9cdd6470f48c8                                                                                         2 minutes ago        Exited              etcd                      1                   2756fbccad3ec       etcd-functional-488000
	8a0becd5aeb11       89d57b83c1786                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   77e6fde40619a       kube-controller-manager-functional-488000
	
	* 
	* ==> coredns [330498b12539] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47766 - 56091 "HINFO IN 9101944474436276597.2269558655315545093. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005079164s
	[INFO] 10.244.0.1:5889 - 53710 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000103246s
	[INFO] 10.244.0.1:51820 - 48994 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000096121s
	[INFO] 10.244.0.1:4099 - 50323 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001018291s
	[INFO] 10.244.0.1:40983 - 2650 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000128537s
	[INFO] 10.244.0.1:14401 - 12803 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000065122s
	[INFO] 10.244.0.1:5849 - 10276 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000088538s
	
	* 
	* ==> coredns [6821a487dcbf] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33693 - 10144 "HINFO IN 4777986893150462222.2507666859569676209. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004221019s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-488000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-488000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d9526fa6c1a1bb1b20f95e15606f1308e308d84a
	                    minikube.k8s.io/name=functional-488000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_03T17_23_56_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 00:23:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-488000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 00:26:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 00:26:13 +0000   Wed, 04 Oct 2023 00:23:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 00:26:13 +0000   Wed, 04 Oct 2023 00:23:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 00:26:13 +0000   Wed, 04 Oct 2023 00:23:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 00:26:13 +0000   Wed, 04 Oct 2023 00:24:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-488000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 83731bd4c6d145e2ae495e64303694d6
	  System UUID:                83731bd4c6d145e2ae495e64303694d6
	  Boot ID:                    05a53b02-4e4e-4789-aec0-a891727ae8e2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-w8tqc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     hello-node-connect-7799dfb7c6-nzszr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 coredns-5dd5756b68-hb78x                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m19s
	  kube-system                 etcd-functional-488000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m32s
	  kube-system                 kube-apiserver-functional-488000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-functional-488000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-9zqjs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-functional-488000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-fms84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-45lvq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m18s              kube-proxy       
	  Normal  Starting                 76s                kube-proxy       
	  Normal  Starting                 116s               kube-proxy       
	  Normal  Starting                 2m32s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m32s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m32s              kubelet          Node functional-488000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m32s              kubelet          Node functional-488000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m32s              kubelet          Node functional-488000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m28s              kubelet          Node functional-488000 status is now: NodeReady
	  Normal  RegisteredNode           2m20s              node-controller  Node functional-488000 event: Registered Node functional-488000 in Controller
	  Normal  RegisteredNode           105s               node-controller  Node functional-488000 event: Registered Node functional-488000 in Controller
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s (x8 over 79s)  kubelet          Node functional-488000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x8 over 79s)  kubelet          Node functional-488000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node functional-488000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           64s                node-controller  Node functional-488000 event: Registered Node functional-488000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.131866] systemd-fstab-generator[3736]: Ignoring "noauto" for root device
	[  +0.086160] systemd-fstab-generator[3747]: Ignoring "noauto" for root device
	[  +0.086763] systemd-fstab-generator[3760]: Ignoring "noauto" for root device
	[  +5.233037] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.312022] systemd-fstab-generator[4260]: Ignoring "noauto" for root device
	[  +0.063103] systemd-fstab-generator[4271]: Ignoring "noauto" for root device
	[  +0.065520] systemd-fstab-generator[4282]: Ignoring "noauto" for root device
	[  +0.077770] systemd-fstab-generator[4342]: Ignoring "noauto" for root device
	[  +0.073448] systemd-fstab-generator[4374]: Ignoring "noauto" for root device
	[  +4.668218] kauditd_printk_skb: 29 callbacks suppressed
	[ +26.474257] systemd-fstab-generator[6199]: Ignoring "noauto" for root device
	[  +0.124398] systemd-fstab-generator[6231]: Ignoring "noauto" for root device
	[  +0.079412] systemd-fstab-generator[6242]: Ignoring "noauto" for root device
	[  +0.088904] systemd-fstab-generator[6255]: Ignoring "noauto" for root device
	[Oct 4 00:25] systemd-fstab-generator[6811]: Ignoring "noauto" for root device
	[  +0.062853] systemd-fstab-generator[6822]: Ignoring "noauto" for root device
	[  +0.070776] systemd-fstab-generator[6833]: Ignoring "noauto" for root device
	[  +0.068457] systemd-fstab-generator[6844]: Ignoring "noauto" for root device
	[  +0.083860] systemd-fstab-generator[6918]: Ignoring "noauto" for root device
	[  +1.285375] systemd-fstab-generator[7172]: Ignoring "noauto" for root device
	[  +3.570291] kauditd_printk_skb: 29 callbacks suppressed
	[ +25.780359] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.006395] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.849986] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Oct 4 00:26] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [0a51a1e9907b] <==
	* {"level":"info","ts":"2023-10-04T00:25:09.912964Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-04T00:25:09.912989Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-04T00:25:09.913064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-10-04T00:25:09.913134Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-10-04T00:25:09.913194Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:25:09.913229Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:25:09.913765Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-04T00:25:09.926007Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-10-04T00:25:09.926061Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-10-04T00:25:09.926433Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-04T00:25:09.926479Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-04T00:25:11.277462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-04T00:25:11.277595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-04T00:25:11.277666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-10-04T00:25:11.277702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-10-04T00:25:11.277723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-10-04T00:25:11.277748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-10-04T00:25:11.277785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-10-04T00:25:11.280878Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-488000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T00:25:11.28093Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T00:25:11.280882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T00:25:11.283214Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-10-04T00:25:11.283297Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T00:25:11.308624Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T00:25:11.308787Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [6e23bee0f394] <==
	* {"level":"info","ts":"2023-10-04T00:24:29.225487Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-04T00:24:30.30284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-04T00:24:30.302983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-04T00:24:30.303032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-10-04T00:24:30.30307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-10-04T00:24:30.303088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-10-04T00:24:30.303134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-10-04T00:24:30.303183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-10-04T00:24:30.308893Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-488000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T00:24:30.309375Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T00:24:30.309696Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T00:24:30.312267Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T00:24:30.312337Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-10-04T00:24:30.312458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T00:24:30.312482Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-04T00:24:55.983466Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-04T00:24:55.983497Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-488000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-10-04T00:24:55.983532Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-04T00:24:55.983572Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-04T00:24:55.992057Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-04T00:24:55.992074Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-04T00:24:55.992095Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-10-04T00:24:55.993313Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-10-04T00:24:55.993348Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-10-04T00:24:55.993357Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-488000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> kernel <==
	*  00:26:28 up 2 min,  0 users,  load average: 0.62, 0.32, 0.12
	Linux functional-488000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [46a3c60ab8b3] <==
	* I1004 00:25:11.960537       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1004 00:25:11.960547       1 shared_informer.go:318] Caches are synced for configmaps
	I1004 00:25:11.960559       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1004 00:25:11.962429       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1004 00:25:11.968530       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1004 00:25:11.968543       1 aggregator.go:166] initial CRD sync complete...
	I1004 00:25:11.968546       1 autoregister_controller.go:141] Starting autoregister controller
	I1004 00:25:11.968574       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 00:25:11.968580       1 cache.go:39] Caches are synced for autoregister controller
	I1004 00:25:12.861122       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 00:25:12.990647       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1004 00:25:12.994154       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1004 00:25:13.005064       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1004 00:25:13.012364       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 00:25:13.014492       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 00:25:24.118554       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 00:25:24.127863       1 controller.go:624] quota admission added evaluator for: endpoints
	I1004 00:25:32.950191       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.236.25"}
	I1004 00:25:37.816856       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.176.75"}
	I1004 00:25:48.203609       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1004 00:25:48.245457       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.206.87"}
	I1004 00:26:02.608743       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.192.134"}
	I1004 00:26:26.138340       1 controller.go:624] quota admission added evaluator for: namespaces
	I1004 00:26:26.229368       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.200.101"}
	I1004 00:26:26.238002       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.124.72"}
	
	* 
	* ==> kube-controller-manager [684a67683972] <==
	* E1004 00:26:26.200487       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1004 00:26:26.200581       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1004 00:26:26.200596       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1004 00:26:26.205509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.916802ms"
	E1004 00:26:26.205524       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1004 00:26:26.208406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="7.905272ms"
	E1004 00:26:26.208481       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1004 00:26:26.208416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="2.879296ms"
	E1004 00:26:26.208674       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1004 00:26:26.208429       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1004 00:26:26.208694       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1004 00:26:26.213785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="2.126389ms"
	E1004 00:26:26.213800       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1004 00:26:26.213880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="2.32647ms"
	E1004 00:26:26.213888       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1004 00:26:26.214003       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1004 00:26:26.214016       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1004 00:26:26.237803       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-45lvq"
	I1004 00:26:26.255874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.053807ms"
	I1004 00:26:26.263064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.164406ms"
	I1004 00:26:26.263101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.042µs"
	I1004 00:26:26.263782       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-fms84"
	I1004 00:26:26.274993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="19.017251ms"
	I1004 00:26:26.289345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="14.312146ms"
	I1004 00:26:26.289468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="32.541µs"
	
	* 
	* ==> kube-controller-manager [8a0becd5aeb1] <==
	* I1004 00:24:43.356061       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1004 00:24:43.355986       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1004 00:24:43.356041       1 event.go:307] "Event occurred" object="functional-488000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-488000 event: Registered Node functional-488000 in Controller"
	I1004 00:24:43.356300       1 taint_manager.go:211] "Sending events to api server"
	I1004 00:24:43.359500       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1004 00:24:43.359652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.085µs"
	I1004 00:24:43.359707       1 shared_informer.go:318] Caches are synced for crt configmap
	I1004 00:24:43.359758       1 shared_informer.go:318] Caches are synced for endpoint
	I1004 00:24:43.359797       1 shared_informer.go:318] Caches are synced for GC
	I1004 00:24:43.359963       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1004 00:24:43.364988       1 shared_informer.go:318] Caches are synced for daemon sets
	I1004 00:24:43.371196       1 shared_informer.go:318] Caches are synced for HPA
	I1004 00:24:43.372304       1 shared_informer.go:318] Caches are synced for cronjob
	I1004 00:24:43.373395       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1004 00:24:43.373412       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1004 00:24:43.374508       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1004 00:24:43.374518       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1004 00:24:43.376668       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1004 00:24:43.396968       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1004 00:24:43.449551       1 shared_informer.go:318] Caches are synced for resource quota
	I1004 00:24:43.499824       1 shared_informer.go:318] Caches are synced for resource quota
	I1004 00:24:43.551194       1 shared_informer.go:318] Caches are synced for attach detach
	I1004 00:24:43.912523       1 shared_informer.go:318] Caches are synced for garbage collector
	I1004 00:24:43.962530       1 shared_informer.go:318] Caches are synced for garbage collector
	I1004 00:24:43.962553       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [9f05e1e6df36] <==
	* I1004 00:25:12.564071       1 server_others.go:69] "Using iptables proxy"
	I1004 00:25:12.570886       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I1004 00:25:12.593631       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 00:25:12.593643       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 00:25:12.594338       1 server_others.go:152] "Using iptables Proxier"
	I1004 00:25:12.594356       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 00:25:12.594435       1 server.go:846] "Version info" version="v1.28.2"
	I1004 00:25:12.594439       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 00:25:12.601027       1 config.go:188] "Starting service config controller"
	I1004 00:25:12.601085       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 00:25:12.601115       1 config.go:97] "Starting endpoint slice config controller"
	I1004 00:25:12.604045       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 00:25:12.604254       1 config.go:315] "Starting node config controller"
	I1004 00:25:12.604287       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 00:25:12.701924       1 shared_informer.go:318] Caches are synced for service config
	I1004 00:25:12.705103       1 shared_informer.go:318] Caches are synced for node config
	I1004 00:25:12.705111       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [b15e1c0913c9] <==
	* I1004 00:24:29.840867       1 server_others.go:69] "Using iptables proxy"
	E1004 00:24:30.943865       1 node.go:130] Failed to retrieve node info: nodes "functional-488000" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
	I1004 00:24:32.091260       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I1004 00:24:32.099898       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 00:24:32.099912       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 00:24:32.100526       1 server_others.go:152] "Using iptables Proxier"
	I1004 00:24:32.100552       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 00:24:32.100632       1 server.go:846] "Version info" version="v1.28.2"
	I1004 00:24:32.100637       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 00:24:32.100917       1 config.go:188] "Starting service config controller"
	I1004 00:24:32.100925       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 00:24:32.100932       1 config.go:97] "Starting endpoint slice config controller"
	I1004 00:24:32.100934       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 00:24:32.101501       1 config.go:315] "Starting node config controller"
	I1004 00:24:32.101504       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 00:24:32.201583       1 shared_informer.go:318] Caches are synced for node config
	I1004 00:24:32.201585       1 shared_informer.go:318] Caches are synced for service config
	I1004 00:24:32.201600       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [6f0962fc282a] <==
	* I1004 00:25:10.236294       1 serving.go:348] Generated self-signed cert in-memory
	W1004 00:25:11.897235       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 00:25:11.897244       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 00:25:11.897249       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 00:25:11.897251       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 00:25:11.933677       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1004 00:25:11.933692       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 00:25:11.934599       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1004 00:25:11.934648       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 00:25:11.934657       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 00:25:11.934693       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1004 00:25:12.036784       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [f552969b825a] <==
	* E1004 00:24:30.937445       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1004 00:24:30.937479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 00:24:30.937486       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1004 00:24:30.940968       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 00:24:30.940985       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 00:24:30.941137       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 00:24:30.941163       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1004 00:24:30.941146       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 00:24:30.941202       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1004 00:24:30.941241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 00:24:30.941281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1004 00:24:30.941281       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 00:24:30.941291       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1004 00:24:30.941318       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 00:24:30.941327       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1004 00:24:30.941357       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 00:24:30.941371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1004 00:24:30.941362       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 00:24:30.941385       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1004 00:24:30.941407       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 00:24:30.941415       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1004 00:24:32.332763       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 00:24:55.979062       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1004 00:24:55.979083       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1004 00:24:55.979141       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 00:23:39 UTC, ends at Wed 2023-10-04 00:26:29 UTC. --
	Oct 04 00:26:10 functional-488000 kubelet[7178]: E1004 00:26:10.470285    7178 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-nzszr_default(edc365f6-d551-4665-af2f-225bc6837e9e)\"" pod="default/hello-node-connect-7799dfb7c6-nzszr" podUID="edc365f6-d551-4665-af2f-225bc6837e9e"
	Oct 04 00:26:10 functional-488000 kubelet[7178]: I1004 00:26:10.791033    7178 topology_manager.go:215] "Topology Admit Handler" podUID="ef8e1b32-a71d-465d-98ee-46e71af8a8a0" podNamespace="default" podName="busybox-mount"
	Oct 04 00:26:10 functional-488000 kubelet[7178]: I1004 00:26:10.983994    7178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzn6x\" (UniqueName: \"kubernetes.io/projected/ef8e1b32-a71d-465d-98ee-46e71af8a8a0-kube-api-access-fzn6x\") pod \"busybox-mount\" (UID: \"ef8e1b32-a71d-465d-98ee-46e71af8a8a0\") " pod="default/busybox-mount"
	Oct 04 00:26:10 functional-488000 kubelet[7178]: I1004 00:26:10.984047    7178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ef8e1b32-a71d-465d-98ee-46e71af8a8a0-test-volume\") pod \"busybox-mount\" (UID: \"ef8e1b32-a71d-465d-98ee-46e71af8a8a0\") " pod="default/busybox-mount"
	Oct 04 00:26:13 functional-488000 kubelet[7178]: I1004 00:26:13.701225    7178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzn6x\" (UniqueName: \"kubernetes.io/projected/ef8e1b32-a71d-465d-98ee-46e71af8a8a0-kube-api-access-fzn6x\") pod \"ef8e1b32-a71d-465d-98ee-46e71af8a8a0\" (UID: \"ef8e1b32-a71d-465d-98ee-46e71af8a8a0\") "
	Oct 04 00:26:13 functional-488000 kubelet[7178]: I1004 00:26:13.701263    7178 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ef8e1b32-a71d-465d-98ee-46e71af8a8a0-test-volume\") pod \"ef8e1b32-a71d-465d-98ee-46e71af8a8a0\" (UID: \"ef8e1b32-a71d-465d-98ee-46e71af8a8a0\") "
	Oct 04 00:26:13 functional-488000 kubelet[7178]: I1004 00:26:13.701293    7178 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef8e1b32-a71d-465d-98ee-46e71af8a8a0-test-volume" (OuterVolumeSpecName: "test-volume") pod "ef8e1b32-a71d-465d-98ee-46e71af8a8a0" (UID: "ef8e1b32-a71d-465d-98ee-46e71af8a8a0"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 04 00:26:13 functional-488000 kubelet[7178]: I1004 00:26:13.704008    7178 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef8e1b32-a71d-465d-98ee-46e71af8a8a0-kube-api-access-fzn6x" (OuterVolumeSpecName: "kube-api-access-fzn6x") pod "ef8e1b32-a71d-465d-98ee-46e71af8a8a0" (UID: "ef8e1b32-a71d-465d-98ee-46e71af8a8a0"). InnerVolumeSpecName "kube-api-access-fzn6x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 04 00:26:13 functional-488000 kubelet[7178]: I1004 00:26:13.801337    7178 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ef8e1b32-a71d-465d-98ee-46e71af8a8a0-test-volume\") on node \"functional-488000\" DevicePath \"\""
	Oct 04 00:26:13 functional-488000 kubelet[7178]: I1004 00:26:13.801359    7178 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fzn6x\" (UniqueName: \"kubernetes.io/projected/ef8e1b32-a71d-465d-98ee-46e71af8a8a0-kube-api-access-fzn6x\") on node \"functional-488000\" DevicePath \"\""
	Oct 04 00:26:14 functional-488000 kubelet[7178]: I1004 00:26:14.497185    7178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="085501f08b95a945940acc7351b2bbd367d5741bb8d69ac0810163844a8ee2cb"
	Oct 04 00:26:18 functional-488000 kubelet[7178]: I1004 00:26:18.035336    7178 scope.go:117] "RemoveContainer" containerID="1759fa87cf1a4389c8d6b3d58ed81910ebe881a57b85b3cf879595699ba37f59"
	Oct 04 00:26:18 functional-488000 kubelet[7178]: I1004 00:26:18.519729    7178 scope.go:117] "RemoveContainer" containerID="1759fa87cf1a4389c8d6b3d58ed81910ebe881a57b85b3cf879595699ba37f59"
	Oct 04 00:26:18 functional-488000 kubelet[7178]: I1004 00:26:18.519947    7178 scope.go:117] "RemoveContainer" containerID="6d29eb0846b0dc90b4fb3db1528deabd56c6f69e8d7e8f0afc20536b92d974c1"
	Oct 04 00:26:18 functional-488000 kubelet[7178]: E1004 00:26:18.520034    7178 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-w8tqc_default(8d75cba7-a791-4207-8484-c811058cc91b)\"" pod="default/hello-node-759d89bdcc-w8tqc" podUID="8d75cba7-a791-4207-8484-c811058cc91b"
	Oct 04 00:26:21 functional-488000 kubelet[7178]: I1004 00:26:21.035299    7178 scope.go:117] "RemoveContainer" containerID="eee59ec12de82005d72b576162cfd1555f6e7abd6052bbe8055ade7b753afe53"
	Oct 04 00:26:21 functional-488000 kubelet[7178]: E1004 00:26:21.035389    7178 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-nzszr_default(edc365f6-d551-4665-af2f-225bc6837e9e)\"" pod="default/hello-node-connect-7799dfb7c6-nzszr" podUID="edc365f6-d551-4665-af2f-225bc6837e9e"
	Oct 04 00:26:26 functional-488000 kubelet[7178]: I1004 00:26:26.241401    7178 topology_manager.go:215] "Topology Admit Handler" podUID="91622edc-fa8c-4c11-aaa4-e2aa715cd423" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-45lvq"
	Oct 04 00:26:26 functional-488000 kubelet[7178]: E1004 00:26:26.241470    7178 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef8e1b32-a71d-465d-98ee-46e71af8a8a0" containerName="mount-munger"
	Oct 04 00:26:26 functional-488000 kubelet[7178]: I1004 00:26:26.241486    7178 memory_manager.go:346] "RemoveStaleState removing state" podUID="ef8e1b32-a71d-465d-98ee-46e71af8a8a0" containerName="mount-munger"
	Oct 04 00:26:26 functional-488000 kubelet[7178]: I1004 00:26:26.269121    7178 topology_manager.go:215] "Topology Admit Handler" podUID="52fe2dd3-921c-4fe7-944e-089ff8f0c843" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-fms84"
	Oct 04 00:26:26 functional-488000 kubelet[7178]: I1004 00:26:26.375875    7178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtgmw\" (UniqueName: \"kubernetes.io/projected/91622edc-fa8c-4c11-aaa4-e2aa715cd423-kube-api-access-mtgmw\") pod \"kubernetes-dashboard-8694d4445c-45lvq\" (UID: \"91622edc-fa8c-4c11-aaa4-e2aa715cd423\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-45lvq"
	Oct 04 00:26:26 functional-488000 kubelet[7178]: I1004 00:26:26.375899    7178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/91622edc-fa8c-4c11-aaa4-e2aa715cd423-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-45lvq\" (UID: \"91622edc-fa8c-4c11-aaa4-e2aa715cd423\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-45lvq"
	Oct 04 00:26:26 functional-488000 kubelet[7178]: I1004 00:26:26.375910    7178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/52fe2dd3-921c-4fe7-944e-089ff8f0c843-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-fms84\" (UID: \"52fe2dd3-921c-4fe7-944e-089ff8f0c843\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-fms84"
	Oct 04 00:26:26 functional-488000 kubelet[7178]: I1004 00:26:26.375933    7178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvtxc\" (UniqueName: \"kubernetes.io/projected/52fe2dd3-921c-4fe7-944e-089ff8f0c843-kube-api-access-zvtxc\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-fms84\" (UID: \"52fe2dd3-921c-4fe7-944e-089ff8f0c843\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-fms84"
	
	* 
	* ==> storage-provisioner [15cf0e45db24] <==
	* I1004 00:24:42.507499       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 00:24:42.512430       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 00:24:42.512448       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 00:24:42.515024       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 00:24:42.515151       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-488000_eb40320f-bebe-4d23-a477-25d0dbdeaaf9!
	I1004 00:24:42.515414       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"256db409-305f-4146-9485-b9c1c36e67dd", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-488000_eb40320f-bebe-4d23-a477-25d0dbdeaaf9 became leader
	I1004 00:24:42.615332       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-488000_eb40320f-bebe-4d23-a477-25d0dbdeaaf9!
	
	* 
	* ==> storage-provisioner [2893cf6a6b37] <==
	* I1004 00:25:12.614153       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 00:25:12.617788       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 00:25:12.617830       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 00:25:30.001091       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 00:25:30.001148       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-488000_9171e651-c266-4573-b5a5-24bd77e4ef15!
	I1004 00:25:30.001173       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"256db409-305f-4146-9485-b9c1c36e67dd", APIVersion:"v1", ResourceVersion:"562", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-488000_9171e651-c266-4573-b5a5-24bd77e4ef15 became leader
	I1004 00:25:30.102259       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-488000_9171e651-c266-4573-b5a5-24bd77e4ef15!
	I1004 00:25:42.704861       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1004 00:25:42.704885       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    3e9b4d2a-432e-4e42-8926-f1b763f33999 347 0 2023-10-04 00:24:10 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-10-04 00:24:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-931a4cdc-5524-4d3d-aaec-715ddc68fc61 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  931a4cdc-5524-4d3d-aaec-715ddc68fc61 623 0 2023-10-04 00:25:42 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-10-04 00:25:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-10-04 00:25:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1004 00:25:42.705333       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"931a4cdc-5524-4d3d-aaec-715ddc68fc61", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1004 00:25:42.705453       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-931a4cdc-5524-4d3d-aaec-715ddc68fc61" provisioned
	I1004 00:25:42.705460       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1004 00:25:42.705463       1 volume_store.go:212] Trying to save persistentvolume "pvc-931a4cdc-5524-4d3d-aaec-715ddc68fc61"
	I1004 00:25:42.713625       1 volume_store.go:219] persistentvolume "pvc-931a4cdc-5524-4d3d-aaec-715ddc68fc61" saved
	I1004 00:25:42.714359       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"931a4cdc-5524-4d3d-aaec-715ddc68fc61", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-931a4cdc-5524-4d3d-aaec-715ddc68fc61
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-488000 -n functional-488000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-488000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-fms84 kubernetes-dashboard-8694d4445c-45lvq
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-488000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-fms84 kubernetes-dashboard-8694d4445c-45lvq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-488000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-fms84 kubernetes-dashboard-8694d4445c-45lvq: exit status 1 (41.647208ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-488000/192.168.105.4
	Start Time:       Tue, 03 Oct 2023 17:26:10 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://ac4df190eca9d6a9e5c4eb9d887697137f8cb5f92f96435de54a3a48b02cf519
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 03 Oct 2023 17:26:12 -0700
	      Finished:     Tue, 03 Oct 2023 17:26:12 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fzn6x (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fzn6x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  19s   default-scheduler  Successfully assigned default/busybox-mount to functional-488000
	  Normal  Pulling    18s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     17s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.038s (1.038s including waiting)
	  Normal  Created    17s   kubelet            Created container mount-munger
	  Normal  Started    17s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-7fd5cb4ddc-fms84" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-45lvq" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-488000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-fms84 kubernetes-dashboard-8694d4445c-45lvq: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (41.27s)
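Two of the three pods named above (dashboard-metrics-scraper-7fd5cb4ddc-fms84, kubernetes-dashboard-8694d4445c-45lvq) were already gone when describe ran, which is the sole reason for exit status 1; busybox-mount appears only because a completed (Succeeded) pod still matches status.phase!=Running. A hedged sketch of a more tolerant post-mortem loop (same context and pod names as the failing command; the fallback message is illustrative):

	for p in busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-fms84 kubernetes-dashboard-8694d4445c-45lvq; do
	  # describe pods one at a time so a single missing pod cannot fail the whole command
	  kubectl --context functional-488000 describe pod "$p" 2>/dev/null \
	    || echo "pod $p not found (already deleted)"
	done

	# excluding completed pods from the non-running filter would also drop busybox-mount:
	kubectl --context functional-488000 get po -A \
	  --field-selector=status.phase!=Running,status.phase!=Succeeded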

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-488000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-488000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 80. stderr: I1003 17:25:37.527035    2447 out.go:296] Setting OutFile to fd 1 ...
I1003 17:25:37.527294    2447 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:25:37.527299    2447 out.go:309] Setting ErrFile to fd 2...
I1003 17:25:37.527301    2447 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:25:37.527430    2447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
I1003 17:25:37.527684    2447 mustload.go:65] Loading cluster: functional-488000
I1003 17:25:37.527890    2447 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:25:37.531576    2447 out.go:177] 
W1003 17:25:37.535660    2447 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/functional-488000/monitor: connect: connection refused
X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/functional-488000/monitor: connect: connection refused
W1003 17:25:37.535669    2447 out.go:239] * 
* 
W1003 17:25:37.536985    2447 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1003 17:25:37.539519    2447 out.go:177] 

stdout: 

functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-488000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-488000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-488000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-488000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2446: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-488000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-488000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.18s)
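The GUEST_STATUS error above means the qemu process backing functional-488000 was already gone: minikube probes the VM through its QMP monitor socket, and the dial was refused. A hedged manual check outside the test (the socket path is taken verbatim from the error text; the qemu.pid location is an assumption based on the image-329000 machine directory shown later in this report):

	# connects to the QMP monitor if the VM is up; "connection refused" reproduces the failure above
	nc -U /Users/jenkins/minikube-integration/17345-986/.minikube/machines/functional-488000/monitor

	# assumed pidfile layout: check whether the qemu process still exists at all
	PID_FILE=/Users/jenkins/minikube-integration/17345-986/.minikube/machines/functional-488000/qemu.pid
	test -f "$PID_FILE" && ps -p "$(cat "$PID_FILE")"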

TestImageBuild/serial/BuildWithBuildArg (1.08s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-329000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-329000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 1eb48772889a
	Removing intermediate container 1eb48772889a
	 ---> 83faf0d5defb
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 043f9ae9c8a1
	Removing intermediate container 043f9ae9c8a1
	 ---> 329dcfcd1187
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 99642807770f
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
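The build args are a red herring here: gcr.io/google-containers/alpine-with-bash:1.0 is an amd64-only image, so the RUN step's amd64 /bin/sh cannot execute on the arm64 docker host, hence "exec format error" at Step 4/5. Outside the test, a hedged workaround sketch: register qemu-user binfmt handlers so amd64 binaries run under emulation, then retry the identical build (whether the minikube guest permits the privileged binfmt container is an assumption):

	# register amd64 binfmt handlers on the docker host, then rerun the exact failing command
	docker run --privileged --rm tonistiigi/binfmt --install amd64
	out/minikube-darwin-arm64 image build -t aaa:latest \
	  --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache \
	  ./testdata/image-build/test-arg -p image-329000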
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-329000 -n image-329000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-329000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                      Args                                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| tunnel  | functional-488000 tunnel                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:25 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-488000 tunnel                                                                                        | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:25 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| addons  | functional-488000 addons list                                                                                   | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:25 PDT | 03 Oct 23 17:25 PDT |
	| addons  | functional-488000 addons list                                                                                   | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:25 PDT | 03 Oct 23 17:25 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-488000 service                                                                                       | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | hello-node-connect --url                                                                                        |                   |         |         |                     |                     |
	| service | functional-488000 service list                                                                                  | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	| service | functional-488000 service list                                                                                  | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-488000 service                                                                                       | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | --namespace=default --https                                                                                     |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                |                   |         |         |                     |                     |
	| service | functional-488000                                                                                               | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | service hello-node --url                                                                                        |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                |                   |         |         |                     |                     |
	| service | functional-488000 service                                                                                       | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | hello-node --url                                                                                                |                   |         |         |                     |                     |
	| mount   | -p functional-488000                                                                                            | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2195568235/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-488000 ssh findmnt                                                                                   | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-488000 ssh findmnt                                                                                   | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-488000 ssh -- ls                                                                                     | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | -la /mount-9p                                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-488000 ssh cat                                                                                       | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | /mount-9p/test-1696379169954988000                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-488000 ssh stat                                                                                      | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | /mount-9p/created-by-test                                                                                       |                   |         |         |                     |                     |
	| image   | functional-488000                                                                                               | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | image ls --format json                                                                                          |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-488000                                                                                               | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | image ls --format table                                                                                         |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-488000 ssh pgrep                                                                                     | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|         | buildkitd                                                                                                       |                   |         |         |                     |                     |
	| image   | functional-488000 image build -t                                                                                | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | localhost/my-image:functional-488000                                                                            |                   |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                                                                |                   |         |         |                     |                     |
	| image   | functional-488000 image ls                                                                                      | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	| delete  | -p functional-488000                                                                                            | functional-488000 | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	| start   | -p image-329000 --driver=qemu2                                                                                  | image-329000      | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:27 PDT |
	|         |                                                                                                                 |                   |         |         |                     |                     |
	| image   | build -t aaa:latest                                                                                             | image-329000      | jenkins | v1.31.2 | 03 Oct 23 17:27 PDT | 03 Oct 23 17:27 PDT |
	|         | ./testdata/image-build/test-normal                                                                              |                   |         |         |                     |                     |
	|         | -p image-329000                                                                                                 |                   |         |         |                     |                     |
	| image   | build -t aaa:latest                                                                                             | image-329000      | jenkins | v1.31.2 | 03 Oct 23 17:27 PDT | 03 Oct 23 17:27 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str                                                                        |                   |         |         |                     |                     |
	|         | --build-opt=no-cache                                                                                            |                   |         |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p                                                                              |                   |         |         |                     |                     |
	|         | image-329000                                                                                                    |                   |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/03 17:26:42
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:26:42.811524    2851 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:26:42.811666    2851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:26:42.811668    2851 out.go:309] Setting ErrFile to fd 2...
	I1003 17:26:42.811670    2851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:26:42.811813    2851 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:26:42.812943    2851 out.go:303] Setting JSON to false
	I1003 17:26:42.830587    2851 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1576,"bootTime":1696377626,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:26:42.830711    2851 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:26:42.834588    2851 out.go:177] * [image-329000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:26:42.841599    2851 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:26:42.841623    2851 notify.go:220] Checking for updates...
	I1003 17:26:42.845453    2851 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:26:42.848615    2851 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:26:42.851552    2851 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:26:42.854450    2851 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:26:42.857505    2851 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:26:42.860693    2851 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:26:42.864489    2851 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:26:42.871531    2851 start.go:298] selected driver: qemu2
	I1003 17:26:42.871538    2851 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:26:42.871543    2851 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:26:42.871603    2851 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:26:42.874511    2851 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:26:42.880135    2851 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1003 17:26:42.880222    2851 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 17:26:42.880241    2851 cni.go:84] Creating CNI manager for ""
	I1003 17:26:42.880246    2851 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:26:42.880249    2851 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:26:42.880253    2851 start_flags.go:321] config:
	{Name:image-329000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:image-329000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:26:42.884701    2851 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:26:42.891441    2851 out.go:177] * Starting control plane node image-329000 in cluster image-329000
	I1003 17:26:42.895561    2851 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:26:42.895574    2851 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:26:42.895588    2851 cache.go:57] Caching tarball of preloaded images
	I1003 17:26:42.895647    2851 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:26:42.895651    2851 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:26:42.895825    2851 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/config.json ...
	I1003 17:26:42.895835    2851 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/config.json: {Name:mkd7bca0dc0391720cf43d59c01cbe8818323453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:26:42.896024    2851 start.go:365] acquiring machines lock for image-329000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:26:42.896052    2851 start.go:369] acquired machines lock for "image-329000" in 24.667µs
	I1003 17:26:42.896061    2851 start.go:93] Provisioning new machine with config: &{Name:image-329000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:image-329000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:26:42.896086    2851 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:26:42.903499    2851 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1003 17:26:42.923338    2851 start.go:159] libmachine.API.Create for "image-329000" (driver="qemu2")
	I1003 17:26:42.923364    2851 client.go:168] LocalClient.Create starting
	I1003 17:26:42.923426    2851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:26:42.923456    2851 main.go:141] libmachine: Decoding PEM data...
	I1003 17:26:42.923463    2851 main.go:141] libmachine: Parsing certificate...
	I1003 17:26:42.923499    2851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:26:42.923515    2851 main.go:141] libmachine: Decoding PEM data...
	I1003 17:26:42.923520    2851 main.go:141] libmachine: Parsing certificate...
	I1003 17:26:42.923863    2851 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:26:43.144125    2851 main.go:141] libmachine: Creating SSH key...
	I1003 17:26:43.329235    2851 main.go:141] libmachine: Creating Disk image...
	I1003 17:26:43.329241    2851 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:26:43.329414    2851 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/disk.qcow2
	I1003 17:26:43.344747    2851 main.go:141] libmachine: STDOUT: 
	I1003 17:26:43.344760    2851 main.go:141] libmachine: STDERR: 
	I1003 17:26:43.344806    2851 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/disk.qcow2 +20000M
	I1003 17:26:43.352469    2851 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:26:43.352478    2851 main.go:141] libmachine: STDERR: 
	I1003 17:26:43.352494    2851 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/disk.qcow2
	I1003 17:26:43.352500    2851 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:26:43.352536    2851 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:90:df:54:86:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/disk.qcow2
	I1003 17:26:43.392302    2851 main.go:141] libmachine: STDOUT: 
	I1003 17:26:43.392318    2851 main.go:141] libmachine: STDERR: 
	I1003 17:26:43.392321    2851 main.go:141] libmachine: Attempt 0
	I1003 17:26:43.392337    2851 main.go:141] libmachine: Searching for 12:90:df:54:86:bb in /var/db/dhcpd_leases ...
	I1003 17:26:43.392402    2851 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1003 17:26:43.392418    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3a:42:d7:84:21:44 ID:1,3a:42:d7:84:21:44 Lease:0x651e020b}
	I1003 17:26:43.392423    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a:c5:28:37:53:67 ID:1,a:c5:28:37:53:67 Lease:0x651cb07e}
	I1003 17:26:43.392427    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651cb05b}
	I1003 17:26:45.394561    2851 main.go:141] libmachine: Attempt 1
	I1003 17:26:45.394608    2851 main.go:141] libmachine: Searching for 12:90:df:54:86:bb in /var/db/dhcpd_leases ...
	I1003 17:26:45.394863    2851 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1003 17:26:45.394967    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3a:42:d7:84:21:44 ID:1,3a:42:d7:84:21:44 Lease:0x651e020b}
	I1003 17:26:45.394994    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a:c5:28:37:53:67 ID:1,a:c5:28:37:53:67 Lease:0x651cb07e}
	I1003 17:26:45.395058    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651cb05b}
	I1003 17:26:47.397204    2851 main.go:141] libmachine: Attempt 2
	I1003 17:26:47.397219    2851 main.go:141] libmachine: Searching for 12:90:df:54:86:bb in /var/db/dhcpd_leases ...
	I1003 17:26:47.397412    2851 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1003 17:26:47.397444    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3a:42:d7:84:21:44 ID:1,3a:42:d7:84:21:44 Lease:0x651e020b}
	I1003 17:26:47.397451    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a:c5:28:37:53:67 ID:1,a:c5:28:37:53:67 Lease:0x651cb07e}
	I1003 17:26:47.397456    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651cb05b}
	I1003 17:26:49.399497    2851 main.go:141] libmachine: Attempt 3
	I1003 17:26:49.399509    2851 main.go:141] libmachine: Searching for 12:90:df:54:86:bb in /var/db/dhcpd_leases ...
	I1003 17:26:49.399560    2851 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1003 17:26:49.399568    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3a:42:d7:84:21:44 ID:1,3a:42:d7:84:21:44 Lease:0x651e020b}
	I1003 17:26:49.399573    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a:c5:28:37:53:67 ID:1,a:c5:28:37:53:67 Lease:0x651cb07e}
	I1003 17:26:49.399577    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651cb05b}
	I1003 17:26:51.401585    2851 main.go:141] libmachine: Attempt 4
	I1003 17:26:51.401590    2851 main.go:141] libmachine: Searching for 12:90:df:54:86:bb in /var/db/dhcpd_leases ...
	I1003 17:26:51.401634    2851 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1003 17:26:51.401639    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3a:42:d7:84:21:44 ID:1,3a:42:d7:84:21:44 Lease:0x651e020b}
	I1003 17:26:51.401643    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a:c5:28:37:53:67 ID:1,a:c5:28:37:53:67 Lease:0x651cb07e}
	I1003 17:26:51.401647    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651cb05b}
	I1003 17:26:53.403705    2851 main.go:141] libmachine: Attempt 5
	I1003 17:26:53.403716    2851 main.go:141] libmachine: Searching for 12:90:df:54:86:bb in /var/db/dhcpd_leases ...
	I1003 17:26:53.403808    2851 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I1003 17:26:53.403816    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3a:42:d7:84:21:44 ID:1,3a:42:d7:84:21:44 Lease:0x651e020b}
	I1003 17:26:53.403820    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a:c5:28:37:53:67 ID:1,a:c5:28:37:53:67 Lease:0x651cb07e}
	I1003 17:26:53.403824    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651cb05b}
	I1003 17:26:55.405876    2851 main.go:141] libmachine: Attempt 6
	I1003 17:26:55.405891    2851 main.go:141] libmachine: Searching for 12:90:df:54:86:bb in /var/db/dhcpd_leases ...
	I1003 17:26:55.406015    2851 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1003 17:26:55.406029    2851 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:12:90:df:54:86:bb ID:1,12:90:df:54:86:bb Lease:0x651e02ce}
	I1003 17:26:55.406032    2851 main.go:141] libmachine: Found match: 12:90:df:54:86:bb
	I1003 17:26:55.406044    2851 main.go:141] libmachine: IP: 192.168.105.5
	I1003 17:26:55.406049    2851 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I1003 17:26:56.411274    2851 machine.go:88] provisioning docker machine ...
	I1003 17:26:56.411293    2851 buildroot.go:166] provisioning hostname "image-329000"
	I1003 17:26:56.411351    2851 main.go:141] libmachine: Using SSH client type: native
	I1003 17:26:56.411619    2851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104721e60] 0x1047245d0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1003 17:26:56.411623    2851 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-329000 && echo "image-329000" | sudo tee /etc/hostname
	I1003 17:26:56.487223    2851 main.go:141] libmachine: SSH cmd err, output: <nil>: image-329000
	
	I1003 17:26:56.487274    2851 main.go:141] libmachine: Using SSH client type: native
	I1003 17:26:56.487525    2851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104721e60] 0x1047245d0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1003 17:26:56.487532    2851 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-329000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-329000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-329000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 17:26:56.560765    2851 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 17:26:56.560773    2851 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17345-986/.minikube CaCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17345-986/.minikube}
	I1003 17:26:56.560780    2851 buildroot.go:174] setting up certificates
	I1003 17:26:56.560784    2851 provision.go:83] configureAuth start
	I1003 17:26:56.560787    2851 provision.go:138] copyHostCerts
	I1003 17:26:56.560844    2851 exec_runner.go:144] found /Users/jenkins/minikube-integration/17345-986/.minikube/ca.pem, removing ...
	I1003 17:26:56.560851    2851 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17345-986/.minikube/ca.pem
	I1003 17:26:56.560970    2851 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/ca.pem (1082 bytes)
	I1003 17:26:56.561152    2851 exec_runner.go:144] found /Users/jenkins/minikube-integration/17345-986/.minikube/cert.pem, removing ...
	I1003 17:26:56.561154    2851 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17345-986/.minikube/cert.pem
	I1003 17:26:56.561195    2851 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/cert.pem (1123 bytes)
	I1003 17:26:56.561288    2851 exec_runner.go:144] found /Users/jenkins/minikube-integration/17345-986/.minikube/key.pem, removing ...
	I1003 17:26:56.561289    2851 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17345-986/.minikube/key.pem
	I1003 17:26:56.561340    2851 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/key.pem (1679 bytes)
	I1003 17:26:56.561427    2851 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem org=jenkins.image-329000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-329000]
	I1003 17:26:56.651185    2851 provision.go:172] copyRemoteCerts
	I1003 17:26:56.651208    2851 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 17:26:56.651213    2851 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/id_rsa Username:docker}
	I1003 17:26:56.689835    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 17:26:56.696943    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1003 17:26:56.704289    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 17:26:56.710890    2851 provision.go:86] duration metric: configureAuth took 150.105416ms
	I1003 17:26:56.710896    2851 buildroot.go:189] setting minikube options for container-runtime
	I1003 17:26:56.710992    2851 config.go:182] Loaded profile config "image-329000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:26:56.711025    2851 main.go:141] libmachine: Using SSH client type: native
	I1003 17:26:56.711242    2851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104721e60] 0x1047245d0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1003 17:26:56.711245    2851 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 17:26:56.782949    2851 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 17:26:56.782955    2851 buildroot.go:70] root file system type: tmpfs
	I1003 17:26:56.783014    2851 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 17:26:56.783067    2851 main.go:141] libmachine: Using SSH client type: native
	I1003 17:26:56.783334    2851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104721e60] 0x1047245d0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1003 17:26:56.783370    2851 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 17:26:56.859685    2851 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 17:26:56.859738    2851 main.go:141] libmachine: Using SSH client type: native
	I1003 17:26:56.860000    2851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104721e60] 0x1047245d0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1003 17:26:56.860008    2851 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 17:26:57.189382    2851 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 17:26:57.189390    2851 machine.go:91] provisioned docker machine in 778.124542ms
	I1003 17:26:57.189394    2851 client.go:171] LocalClient.Create took 14.266299s
	I1003 17:26:57.189412    2851 start.go:167] duration metric: libmachine.API.Create for "image-329000" took 14.266357583s
	I1003 17:26:57.189416    2851 start.go:300] post-start starting for "image-329000" (driver="qemu2")
	I1003 17:26:57.189420    2851 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 17:26:57.189498    2851 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 17:26:57.189506    2851 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/id_rsa Username:docker}
	I1003 17:26:57.226046    2851 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 17:26:57.227526    2851 info.go:137] Remote host: Buildroot 2021.02.12
	I1003 17:26:57.227535    2851 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17345-986/.minikube/addons for local assets ...
	I1003 17:26:57.227602    2851 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17345-986/.minikube/files for local assets ...
	I1003 17:26:57.227698    2851 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17345-986/.minikube/files/etc/ssl/certs/14472.pem -> 14472.pem in /etc/ssl/certs
	I1003 17:26:57.227795    2851 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 17:26:57.230237    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/files/etc/ssl/certs/14472.pem --> /etc/ssl/certs/14472.pem (1708 bytes)
	I1003 17:26:57.237449    2851 start.go:303] post-start completed in 48.030792ms
	I1003 17:26:57.237836    2851 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/config.json ...
	I1003 17:26:57.237991    2851 start.go:128] duration metric: createHost completed in 14.342175292s
	I1003 17:26:57.238015    2851 main.go:141] libmachine: Using SSH client type: native
	I1003 17:26:57.238227    2851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104721e60] 0x1047245d0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1003 17:26:57.238230    2851 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1003 17:26:57.308483    2851 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696379217.520050793
	
	I1003 17:26:57.308488    2851 fix.go:206] guest clock: 1696379217.520050793
	I1003 17:26:57.308491    2851 fix.go:219] Guest: 2023-10-03 17:26:57.520050793 -0700 PDT Remote: 2023-10-03 17:26:57.237993 -0700 PDT m=+14.447468876 (delta=282.057793ms)
	I1003 17:26:57.308501    2851 fix.go:190] guest clock delta is within tolerance: 282.057793ms
	I1003 17:26:57.308503    2851 start.go:83] releasing machines lock for "image-329000", held for 14.412722833s
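
	The guest-clock check above runs "date +%s.%N" in the guest over SSH, parses the epoch timestamp from the output, and compares it against the host clock; here the 282ms delta is within tolerance, so the guest clock is left alone. A minimal Go sketch of that comparison, assuming hypothetical names and a 2s tolerance (not minikube's actual implementation):

	    package main

	    import (
	        "fmt"
	        "strconv"
	        "strings"
	        "time"
	    )

	    // parseGuestClock turns output like "1696379217.520050793" into a time.Time.
	    func parseGuestClock(out string) (time.Time, error) {
	        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	        sec, err := strconv.ParseInt(parts[0], 10, 64)
	        if err != nil {
	            return time.Time{}, err
	        }
	        var nsec int64
	        if len(parts) == 2 {
	            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
	                return time.Time{}, err
	            }
	        }
	        return time.Unix(sec, nsec), nil
	    }

	    func main() {
	        // Output captured from the guest, as in the log above.
	        guest, err := parseGuestClock("1696379217.520050793")
	        if err != nil {
	            panic(err)
	        }
	        delta := time.Since(guest)
	        if delta < 0 {
	            delta = -delta
	        }
	        // Assumed tolerance: within 2s the guest clock is accepted as-is;
	        // otherwise it would have to be resynchronized.
	        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
	    }
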
	I1003 17:26:57.308806    2851 ssh_runner.go:195] Run: cat /version.json
	I1003 17:26:57.308812    2851 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/id_rsa Username:docker}
	I1003 17:26:57.308817    2851 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 17:26:57.308835    2851 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/id_rsa Username:docker}
	I1003 17:26:57.346674    2851 ssh_runner.go:195] Run: systemctl --version
	I1003 17:26:57.390897    2851 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 17:26:57.392861    2851 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 17:26:57.392891    2851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 17:26:57.398765    2851 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 17:26:57.398770    2851 start.go:469] detecting cgroup driver to use...
	I1003 17:26:57.398838    2851 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 17:26:57.404473    2851 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1003 17:26:57.407918    2851 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 17:26:57.411119    2851 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 17:26:57.411139    2851 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 17:26:57.414070    2851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 17:26:57.417406    2851 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 17:26:57.420688    2851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 17:26:57.424048    2851 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 17:26:57.427086    2851 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 17:26:57.429908    2851 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 17:26:57.433053    2851 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 17:26:57.436099    2851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:26:57.496205    2851 ssh_runner.go:195] Run: sudo systemctl restart containerd
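
	The sed sequence above rewrites /etc/containerd/config.toml in place: it pins the sandbox image to registry.k8s.io/pause:3.9, sets SystemdCgroup = false so containerd agrees with the "cgroupfs" driver, and migrates legacy runtime names to io.containerd.runc.v2 before restarting containerd. A stand-alone Go sketch of the SystemdCgroup toggle, equivalent to the logged sed command (a hypothetical helper, not minikube's code):

	    package main

	    import (
	        "os"
	        "regexp"
	    )

	    func main() {
	        path := "/etc/containerd/config.toml"
	        data, err := os.ReadFile(path)
	        if err != nil {
	            panic(err) // sketch only; real code would surface the error
	        }
	        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	        if err := os.WriteFile(path, out, 0o644); err != nil {
	            panic(err)
	        }
	    }
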
	I1003 17:26:57.504275    2851 start.go:469] detecting cgroup driver to use...
	I1003 17:26:57.504342    2851 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 17:26:57.510700    2851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 17:26:57.515497    2851 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 17:26:57.521182    2851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 17:26:57.526421    2851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 17:26:57.530946    2851 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 17:26:57.569394    2851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 17:26:57.574660    2851 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 17:26:57.579913    2851 ssh_runner.go:195] Run: which cri-dockerd
	I1003 17:26:57.581184    2851 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 17:26:57.584303    2851 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1003 17:26:57.589584    2851 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 17:26:57.652487    2851 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 17:26:57.715906    2851 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 17:26:57.715961    2851 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 17:26:57.721412    2851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:26:57.786080    2851 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 17:26:58.946642    2851 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.160568209s)
	I1003 17:26:58.946692    2851 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 17:26:59.024310    2851 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 17:26:59.083218    2851 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 17:26:59.143558    2851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:26:59.201744    2851 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 17:26:59.209194    2851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:26:59.276301    2851 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1003 17:26:59.298737    2851 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 17:26:59.298796    2851 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 17:26:59.301413    2851 start.go:537] Will wait 60s for crictl version
	I1003 17:26:59.301456    2851 ssh_runner.go:195] Run: which crictl
	I1003 17:26:59.302983    2851 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 17:26:59.321995    2851 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1003 17:26:59.322066    2851 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 17:26:59.333418    2851 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 17:26:59.346809    2851 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1003 17:26:59.346952    2851 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1003 17:26:59.348308    2851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 17:26:59.352326    2851 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:26:59.352372    2851 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 17:26:59.360889    2851 docker.go:664] Got preloaded images: 
	I1003 17:26:59.360893    2851 docker.go:670] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I1003 17:26:59.360924    2851 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 17:26:59.363920    2851 ssh_runner.go:195] Run: which lz4
	I1003 17:26:59.365167    2851 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 17:26:59.366410    2851 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 17:26:59.366422    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I1003 17:27:00.662456    2851 docker.go:628] Took 1.297334 seconds to copy over tarball
	I1003 17:27:00.662510    2851 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 17:27:01.691815    2851 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.029311667s)
	I1003 17:27:01.691824    2851 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1003 17:27:01.707322    2851 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 17:27:01.710305    2851 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1003 17:27:01.715341    2851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:27:01.774138    2851 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 17:27:03.245270    2851 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.471146292s)
	I1003 17:27:03.245345    2851 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 17:27:03.255725    2851 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 17:27:03.255731    2851 cache_images.go:84] Images are preloaded, skipping loading
	I1003 17:27:03.255787    2851 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 17:27:03.263355    2851 cni.go:84] Creating CNI manager for ""
	I1003 17:27:03.263361    2851 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:27:03.263368    2851 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1003 17:27:03.263376    2851 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-329000 NodeName:image-329000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 17:27:03.263442    2851 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-329000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 17:27:03.263471    2851 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-329000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:image-329000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1003 17:27:03.263525    2851 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1003 17:27:03.266430    2851 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 17:27:03.266454    2851 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 17:27:03.269292    2851 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1003 17:27:03.274437    2851 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 17:27:03.279296    2851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I1003 17:27:03.284556    2851 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I1003 17:27:03.285887    2851 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 17:27:03.289765    2851 certs.go:56] Setting up /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000 for IP: 192.168.105.5
	I1003 17:27:03.289772    2851 certs.go:190] acquiring lock for shared ca certs: {Name:mk60f926c1ccb065a30406b60af36acc708e601e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:03.289900    2851 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key
	I1003 17:27:03.289935    2851 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key
	I1003 17:27:03.289959    2851 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/client.key
	I1003 17:27:03.289965    2851 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/client.crt with IP's: []
	I1003 17:27:03.477304    2851 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/client.crt ...
	I1003 17:27:03.477309    2851 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/client.crt: {Name:mk3db56b1c4c181fe6a243e27789f44d44e3207a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:03.477582    2851 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/client.key ...
	I1003 17:27:03.477584    2851 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/client.key: {Name:mkbad821b2cfe8a5ca0b314c9aeedb5e7aab7bbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:03.477700    2851 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/apiserver.key.e69b33ca
	I1003 17:27:03.477705    2851 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I1003 17:27:03.556883    2851 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/apiserver.crt.e69b33ca ...
	I1003 17:27:03.556885    2851 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/apiserver.crt.e69b33ca: {Name:mk22282108492de843003fa615aea06c2e672e49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:03.557020    2851 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/apiserver.key.e69b33ca ...
	I1003 17:27:03.557022    2851 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/apiserver.key.e69b33ca: {Name:mk1edf0f0e102ceb11cf1dfcc0f4346b4bb20a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:03.557125    2851 certs.go:337] copying /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/apiserver.crt
	I1003 17:27:03.557370    2851 certs.go:341] copying /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/apiserver.key
	I1003 17:27:03.557499    2851 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/proxy-client.key
	I1003 17:27:03.557505    2851 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/proxy-client.crt with IP's: []
	I1003 17:27:03.891443    2851 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/proxy-client.crt ...
	I1003 17:27:03.891450    2851 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/proxy-client.crt: {Name:mk08f1e15dfdc51a6693a2e7ac0d0b5bf68e0e92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:03.891760    2851 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/proxy-client.key ...
	I1003 17:27:03.891762    2851 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/proxy-client.key: {Name:mk7441b245dfde48464bd8e2024e39fb28988894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:03.892006    2851 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/1447.pem (1338 bytes)
	W1003 17:27:03.892042    2851 certs.go:433] ignoring /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/1447_empty.pem, impossibly tiny 0 bytes
	I1003 17:27:03.892047    2851 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 17:27:03.892066    2851 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem (1082 bytes)
	I1003 17:27:03.892082    2851 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem (1123 bytes)
	I1003 17:27:03.892098    2851 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem (1679 bytes)
	I1003 17:27:03.892136    2851 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17345-986/.minikube/files/etc/ssl/certs/14472.pem (1708 bytes)
	I1003 17:27:03.892514    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1003 17:27:03.900596    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 17:27:03.907815    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 17:27:03.914697    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/image-329000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 17:27:03.921514    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 17:27:03.928891    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 17:27:03.936143    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 17:27:03.942656    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 17:27:03.949485    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 17:27:03.956752    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/certs/1447.pem --> /usr/share/ca-certificates/1447.pem (1338 bytes)
	I1003 17:27:03.963866    2851 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/files/etc/ssl/certs/14472.pem --> /usr/share/ca-certificates/14472.pem (1708 bytes)
	I1003 17:27:03.970457    2851 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 17:27:03.975159    2851 ssh_runner.go:195] Run: openssl version
	I1003 17:27:03.977209    2851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 17:27:03.980539    2851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:27:03.981978    2851 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:27:03.981996    2851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:27:03.983757    2851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 17:27:03.986663    2851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1447.pem && ln -fs /usr/share/ca-certificates/1447.pem /etc/ssl/certs/1447.pem"
	I1003 17:27:03.989684    2851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1447.pem
	I1003 17:27:03.991152    2851 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:23 /usr/share/ca-certificates/1447.pem
	I1003 17:27:03.991167    2851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1447.pem
	I1003 17:27:03.992969    2851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1447.pem /etc/ssl/certs/51391683.0"
	I1003 17:27:03.996043    2851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14472.pem && ln -fs /usr/share/ca-certificates/14472.pem /etc/ssl/certs/14472.pem"
	I1003 17:27:03.999045    2851 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14472.pem
	I1003 17:27:04.000425    2851 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:23 /usr/share/ca-certificates/14472.pem
	I1003 17:27:04.000441    2851 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14472.pem
	I1003 17:27:04.002211    2851 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14472.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 17:27:04.005321    2851 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1003 17:27:04.006547    2851 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1003 17:27:04.006576    2851 kubeadm.go:404] StartCluster: {Name:image-329000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:image-329000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:27:04.006638    2851 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 17:27:04.019426    2851 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 17:27:04.022133    2851 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 17:27:04.025159    2851 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 17:27:04.027908    2851 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 17:27:04.027919    2851 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 17:27:04.049526    2851 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1003 17:27:04.049556    2851 kubeadm.go:322] [preflight] Running pre-flight checks
	I1003 17:27:04.107803    2851 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 17:27:04.107845    2851 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 17:27:04.107898    2851 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1003 17:27:04.206877    2851 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 17:27:04.216062    2851 out.go:204]   - Generating certificates and keys ...
	I1003 17:27:04.216095    2851 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1003 17:27:04.216124    2851 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1003 17:27:04.362192    2851 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 17:27:04.445906    2851 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1003 17:27:04.648758    2851 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1003 17:27:04.767841    2851 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1003 17:27:04.921939    2851 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1003 17:27:04.921996    2851 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-329000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I1003 17:27:05.069604    2851 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1003 17:27:05.069666    2851 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-329000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I1003 17:27:05.094779    2851 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 17:27:05.233795    2851 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 17:27:05.420933    2851 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1003 17:27:05.420965    2851 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 17:27:05.489593    2851 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 17:27:05.545074    2851 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 17:27:05.819910    2851 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 17:27:06.004802    2851 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 17:27:06.005263    2851 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 17:27:06.006685    2851 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 17:27:06.013852    2851 out.go:204]   - Booting up control plane ...
	I1003 17:27:06.013941    2851 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 17:27:06.013980    2851 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 17:27:06.014018    2851 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 17:27:06.014069    2851 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 17:27:06.014108    2851 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 17:27:06.014130    2851 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1003 17:27:06.103233    2851 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1003 17:27:10.106698    2851 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003752 seconds
	I1003 17:27:10.106752    2851 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 17:27:10.116311    2851 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 17:27:10.625848    2851 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 17:27:10.625935    2851 kubeadm.go:322] [mark-control-plane] Marking the node image-329000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 17:27:11.130928    2851 kubeadm.go:322] [bootstrap-token] Using token: 43jofe.or1fcbtcuku2ig6w
	I1003 17:27:11.136144    2851 out.go:204]   - Configuring RBAC rules ...
	I1003 17:27:11.136196    2851 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 17:27:11.137350    2851 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 17:27:11.145699    2851 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 17:27:11.146972    2851 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 17:27:11.148203    2851 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 17:27:11.149322    2851 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 17:27:11.153218    2851 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 17:27:11.321423    2851 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1003 17:27:11.539839    2851 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1003 17:27:11.540258    2851 kubeadm.go:322] 
	I1003 17:27:11.540290    2851 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1003 17:27:11.540292    2851 kubeadm.go:322] 
	I1003 17:27:11.540326    2851 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1003 17:27:11.540329    2851 kubeadm.go:322] 
	I1003 17:27:11.540340    2851 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1003 17:27:11.540376    2851 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 17:27:11.540416    2851 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 17:27:11.540419    2851 kubeadm.go:322] 
	I1003 17:27:11.540443    2851 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1003 17:27:11.540447    2851 kubeadm.go:322] 
	I1003 17:27:11.540472    2851 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 17:27:11.540474    2851 kubeadm.go:322] 
	I1003 17:27:11.540497    2851 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1003 17:27:11.540538    2851 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 17:27:11.540570    2851 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 17:27:11.540572    2851 kubeadm.go:322] 
	I1003 17:27:11.540621    2851 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 17:27:11.540663    2851 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1003 17:27:11.540666    2851 kubeadm.go:322] 
	I1003 17:27:11.540715    2851 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 43jofe.or1fcbtcuku2ig6w \
	I1003 17:27:11.540772    2851 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9490a7a2ce6f448c9720863afea1604779c0ef0543f588db367e732362807037 \
	I1003 17:27:11.540782    2851 kubeadm.go:322] 	--control-plane 
	I1003 17:27:11.540784    2851 kubeadm.go:322] 
	I1003 17:27:11.540831    2851 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1003 17:27:11.540833    2851 kubeadm.go:322] 
	I1003 17:27:11.540868    2851 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 43jofe.or1fcbtcuku2ig6w \
	I1003 17:27:11.540916    2851 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9490a7a2ce6f448c9720863afea1604779c0ef0543f588db367e732362807037 
	I1003 17:27:11.541017    2851 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 17:27:11.541023    2851 cni.go:84] Creating CNI manager for ""
	I1003 17:27:11.541031    2851 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:27:11.547527    2851 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1003 17:27:11.553495    2851 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1003 17:27:11.556637    2851 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1003 17:27:11.561524    2851 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 17:27:11.561588    2851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:27:11.561591    2851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d9526fa6c1a1bb1b20f95e15606f1308e308d84a minikube.k8s.io/name=image-329000 minikube.k8s.io/updated_at=2023_10_03T17_27_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:27:11.573725    2851 ops.go:34] apiserver oom_adj: -16
	I1003 17:27:11.620286    2851 kubeadm.go:1081] duration metric: took 58.723791ms to wait for elevateKubeSystemPrivileges.
	I1003 17:27:11.626342    2851 kubeadm.go:406] StartCluster complete in 7.619909792s
	I1003 17:27:11.626354    2851 settings.go:142] acquiring lock: {Name:mkad5f21e92defa14247d9a0adf05a6e4272cec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:11.626425    2851 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:27:11.626984    2851 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/kubeconfig: {Name:mke3e06a6a2057954076f4772b87ca4469721c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:11.627324    2851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 17:27:11.627407    2851 config.go:182] Loaded profile config "image-329000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:27:11.627411    2851 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1003 17:27:11.627542    2851 addons.go:69] Setting default-storageclass=true in profile "image-329000"
	I1003 17:27:11.627549    2851 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-329000"
	I1003 17:27:11.627568    2851 addons.go:69] Setting storage-provisioner=true in profile "image-329000"
	I1003 17:27:11.627574    2851 addons.go:231] Setting addon storage-provisioner=true in "image-329000"
	I1003 17:27:11.627601    2851 host.go:66] Checking if "image-329000" exists ...
	I1003 17:27:11.628923    2851 addons.go:231] Setting addon default-storageclass=true in "image-329000"
	I1003 17:27:11.628930    2851 host.go:66] Checking if "image-329000" exists ...
	I1003 17:27:11.633001    2851 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 17:27:11.629573    2851 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 17:27:11.637021    2851 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 17:27:11.637032    2851 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/id_rsa Username:docker}
	I1003 17:27:11.637099    2851 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 17:27:11.637102    2851 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 17:27:11.637105    2851 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/image-329000/id_rsa Username:docker}
	I1003 17:27:11.639536    2851 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-329000" context rescaled to 1 replicas
	I1003 17:27:11.639550    2851 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:27:11.641870    2851 out.go:177] * Verifying Kubernetes components...
	I1003 17:27:11.646048    2851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 17:27:11.672310    2851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 17:27:11.672658    2851 api_server.go:52] waiting for apiserver process to appear ...
	I1003 17:27:11.672690    2851 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 17:27:11.710895    2851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 17:27:11.731176    2851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 17:27:12.083584    2851 api_server.go:72] duration metric: took 444.028792ms to wait for apiserver process to appear ...
	I1003 17:27:12.083590    2851 api_server.go:88] waiting for apiserver healthz status ...
	I1003 17:27:12.083597    2851 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I1003 17:27:12.083629    2851 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I1003 17:27:12.086708    2851 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I1003 17:27:12.087527    2851 api_server.go:141] control plane version: v1.28.2
	I1003 17:27:12.087531    2851 api_server.go:131] duration metric: took 3.938709ms to wait for apiserver health ...
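
	The healthz wait above polls https://192.168.105.5:8443/healthz until it answers 200 "ok", then records the elapsed time. A minimal Go sketch of such a poll loop (hypothetical names; the real client trusts the cluster CA instead of skipping TLS verification as this sketch does):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            // Simplification for the sketch: do not verify the apiserver cert.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // control plane is serving
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver %s not healthy after %v", url, timeout)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.105.5:8443/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
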
	I1003 17:27:12.087535    2851 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 17:27:12.091348    2851 system_pods.go:59] 4 kube-system pods found
	I1003 17:27:12.091354    2851 system_pods.go:61] "etcd-image-329000" [dc37c778-a1d6-4642-8226-af9ec70505b9] Pending
	I1003 17:27:12.091356    2851 system_pods.go:61] "kube-apiserver-image-329000" [77d61d7c-d103-42b6-afaf-66eb46cc96a9] Pending
	I1003 17:27:12.091358    2851 system_pods.go:61] "kube-controller-manager-image-329000" [a0ea702e-4d11-495f-b5a4-9cfc38ed99d2] Pending
	I1003 17:27:12.091359    2851 system_pods.go:61] "kube-scheduler-image-329000" [7724e046-6efc-4b43-9a9f-2eb0bca483ce] Pending
	I1003 17:27:12.091361    2851 system_pods.go:74] duration metric: took 3.824542ms to wait for pod list to return data ...
	I1003 17:27:12.091364    2851 kubeadm.go:581] duration metric: took 451.814458ms to wait for : map[apiserver:true system_pods:true] ...
	I1003 17:27:12.091369    2851 node_conditions.go:102] verifying NodePressure condition ...
	I1003 17:27:12.092669    2851 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1003 17:27:12.092677    2851 node_conditions.go:123] node cpu capacity is 2
	I1003 17:27:12.092682    2851 node_conditions.go:105] duration metric: took 1.311ms to run NodePressure ...
	I1003 17:27:12.092686    2851 start.go:228] waiting for startup goroutines ...
	I1003 17:27:12.181196    2851 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1003 17:27:12.189261    2851 addons.go:502] enable addons completed in 561.9175ms: enabled=[default-storageclass storage-provisioner]
	I1003 17:27:12.189275    2851 start.go:233] waiting for cluster config update ...
	I1003 17:27:12.189279    2851 start.go:242] writing updated cluster config ...
	I1003 17:27:12.189577    2851 ssh_runner.go:195] Run: rm -f paused
	I1003 17:27:12.218523    2851 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I1003 17:27:12.223222    2851 out.go:177] * Done! kubectl is now configured to use "image-329000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-10-04 00:26:54 UTC, ends at Wed 2023-10-04 00:27:14 UTC. --
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.380908756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.380915381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.382807006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.382827214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.382833756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.382838006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.411128548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.411259839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.411287506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.411312714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.467112714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.467135964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.467141506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 00:27:07 image-329000 dockerd[1114]: time="2023-10-04T00:27:07.467252339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:27:13 image-329000 dockerd[1108]: time="2023-10-04T00:27:13.457792926Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Oct 04 00:27:13 image-329000 dockerd[1108]: time="2023-10-04T00:27:13.583558801Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Oct 04 00:27:13 image-329000 dockerd[1108]: time="2023-10-04T00:27:13.600275884Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Oct 04 00:27:13 image-329000 dockerd[1114]: time="2023-10-04T00:27:13.640624759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 00:27:13 image-329000 dockerd[1114]: time="2023-10-04T00:27:13.640658426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:27:13 image-329000 dockerd[1114]: time="2023-10-04T00:27:13.640665926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 00:27:13 image-329000 dockerd[1114]: time="2023-10-04T00:27:13.640866051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:27:13 image-329000 dockerd[1108]: time="2023-10-04T00:27:13.761794426Z" level=info msg="ignoring event" container=99642807770fb59abd00c452639d912eaf6991fec6efbaf3c1975abe8812d037 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:27:13 image-329000 dockerd[1114]: time="2023-10-04T00:27:13.761964884Z" level=info msg="shim disconnected" id=99642807770fb59abd00c452639d912eaf6991fec6efbaf3c1975abe8812d037 namespace=moby
	Oct 04 00:27:13 image-329000 dockerd[1114]: time="2023-10-04T00:27:13.761994009Z" level=warning msg="cleaning up after shim disconnected" id=99642807770fb59abd00c452639d912eaf6991fec6efbaf3c1975abe8812d037 namespace=moby
	Oct 04 00:27:13 image-329000 dockerd[1114]: time="2023-10-04T00:27:13.761998592Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6eedc8f439e51       64fc40cee3716       7 seconds ago       Running             kube-scheduler            0                   8fd7f9b4f4971       kube-scheduler-image-329000
	913698178eb82       9cdd6470f48c8       7 seconds ago       Running             etcd                      0                   12e41af44ded2       etcd-image-329000
	cc281224b290d       89d57b83c1786       7 seconds ago       Running             kube-controller-manager   0                   0e06c19eb0a31       kube-controller-manager-image-329000
	5b0a0cfabdd3c       30bb499447fe1       7 seconds ago       Running             kube-apiserver            0                   e992b4fcc7fcf       kube-apiserver-image-329000
	
	* 
	* ==> describe nodes <==
	* Name:               image-329000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-329000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d9526fa6c1a1bb1b20f95e15606f1308e308d84a
	                    minikube.k8s.io/name=image-329000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_03T17_27_11_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 00:27:09 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-329000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 00:27:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 00:27:11 +0000   Wed, 04 Oct 2023 00:27:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 00:27:11 +0000   Wed, 04 Oct 2023 00:27:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 00:27:11 +0000   Wed, 04 Oct 2023 00:27:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 04 Oct 2023 00:27:11 +0000   Wed, 04 Oct 2023 00:27:08 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-329000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 1977ce02a99c4e31b40d7a9cec05b494
	  System UUID:                1977ce02a99c4e31b40d7a9cec05b494
	  Boot ID:                    0cc7b4ce-da3e-4f35-a8fb-0b5286237b43
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-329000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-329000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-329000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-329000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 3s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node image-329000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node image-329000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node image-329000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Oct 4 00:26] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.645406] EINJ: EINJ table not found.
	[  +0.523878] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044108] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000885] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.134891] systemd-fstab-generator[481]: Ignoring "noauto" for root device
	[  +0.060065] systemd-fstab-generator[493]: Ignoring "noauto" for root device
	[  +0.440283] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.154368] systemd-fstab-generator[708]: Ignoring "noauto" for root device
	[  +0.063115] systemd-fstab-generator[719]: Ignoring "noauto" for root device
	[  +0.068740] systemd-fstab-generator[732]: Ignoring "noauto" for root device
	[  +1.147154] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.092392] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +0.058828] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.062059] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +0.059622] systemd-fstab-generator[954]: Ignoring "noauto" for root device
	[  +0.070508] systemd-fstab-generator[995]: Ignoring "noauto" for root device
	[Oct 4 00:27] systemd-fstab-generator[1101]: Ignoring "noauto" for root device
	[  +4.323336] systemd-fstab-generator[1485]: Ignoring "noauto" for root device
	[  +0.353642] kauditd_printk_skb: 68 callbacks suppressed
	[  +4.760872] systemd-fstab-generator[2195]: Ignoring "noauto" for root device
	[  +2.227200] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [913698178eb8] <==
	* {"level":"info","ts":"2023-10-04T00:27:07.508691Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-10-04T00:27:07.508764Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-10-04T00:27:07.509063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-10-04T00:27:07.509122Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-10-04T00:27:07.50864Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-04T00:27:07.509242Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-04T00:27:07.509282Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-04T00:27:08.49715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-04T00:27:08.497303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-04T00:27:08.497374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-10-04T00:27:08.497735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-10-04T00:27:08.497848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-10-04T00:27:08.498032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-10-04T00:27:08.498087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-10-04T00:27:08.500125Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:27:08.501327Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-329000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T00:27:08.501619Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:27:08.501761Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:27:08.502091Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T00:27:08.5021Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T00:27:08.502325Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T00:27:08.502427Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-04T00:27:08.502146Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T00:27:08.505067Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-10-04T00:27:08.505106Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  00:27:14 up 0 min,  0 users,  load average: 0.47, 0.11, 0.04
	Linux image-329000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5b0a0cfabdd3] <==
	* I1004 00:27:09.159323       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1004 00:27:09.159337       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1004 00:27:09.159930       1 controller.go:624] quota admission added evaluator for: namespaces
	I1004 00:27:09.161370       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 00:27:09.161998       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1004 00:27:09.162259       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1004 00:27:09.162522       1 aggregator.go:166] initial CRD sync complete...
	I1004 00:27:09.162534       1 autoregister_controller.go:141] Starting autoregister controller
	I1004 00:27:09.162536       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 00:27:09.162539       1 cache.go:39] Caches are synced for autoregister controller
	I1004 00:27:09.168290       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1004 00:27:09.179931       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 00:27:10.063697       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1004 00:27:10.065769       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1004 00:27:10.065815       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 00:27:10.206527       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 00:27:10.216777       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 00:27:10.263004       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 00:27:10.265031       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I1004 00:27:10.265382       1 controller.go:624] quota admission added evaluator for: endpoints
	I1004 00:27:10.267011       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 00:27:11.089825       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1004 00:27:11.528782       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1004 00:27:11.532855       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 00:27:11.538220       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [cc281224b290] <==
	* I1004 00:27:13.990761       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I1004 00:27:13.990770       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I1004 00:27:13.990777       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="endpoints"
	I1004 00:27:13.990785       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I1004 00:27:13.990797       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I1004 00:27:13.990812       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I1004 00:27:13.990824       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="limitranges"
	I1004 00:27:13.990829       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I1004 00:27:13.990834       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I1004 00:27:13.990842       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	W1004 00:27:13.990855       1 shared_informer.go:593] resyncPeriod 13h13m49.883779725s is smaller than resyncCheckPeriod 17h20m20.250511555s and the informer has already started. Changing it to 17h20m20.250511555s
	I1004 00:27:13.990870       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I1004 00:27:13.990876       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I1004 00:27:13.990882       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I1004 00:27:13.990893       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I1004 00:27:13.990898       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I1004 00:27:13.990906       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I1004 00:27:13.990912       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I1004 00:27:13.990925       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I1004 00:27:13.991004       1 resource_quota_controller.go:295] "Starting resource quota controller"
	I1004 00:27:13.991009       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1004 00:27:13.991017       1 resource_quota_monitor.go:291] "QuotaMonitor running"
	I1004 00:27:14.241282       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I1004 00:27:14.241456       1 namespace_controller.go:197] "Starting namespace controller"
	I1004 00:27:14.241460       1 shared_informer.go:311] Waiting for caches to sync for namespace
	
	* 
	* ==> kube-scheduler [6eedc8f439e5] <==
	* W1004 00:27:09.136110       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 00:27:09.136122       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1004 00:27:09.136145       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 00:27:09.136244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1004 00:27:09.958955       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 00:27:09.959008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1004 00:27:09.967585       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 00:27:09.967598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1004 00:27:09.973772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 00:27:09.973782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1004 00:27:10.029573       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 00:27:10.029583       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 00:27:10.053232       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 00:27:10.053246       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1004 00:27:10.072043       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 00:27:10.072059       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1004 00:27:10.100152       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 00:27:10.100245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 00:27:10.105802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 00:27:10.105850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1004 00:27:10.124313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 00:27:10.124331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1004 00:27:10.133059       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 00:27:10.133117       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1004 00:27:10.424871       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 00:26:54 UTC, ends at Wed 2023-10-04 00:27:14 UTC. --
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.688383    2201 topology_manager.go:215] "Topology Admit Handler" podUID="c4eaea8f307aeb9e3b1f69475b461d8c" podNamespace="kube-system" podName="kube-controller-manager-image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.688400    2201 topology_manager.go:215] "Topology Admit Handler" podUID="84796ce4422a8feb25b67e35bf823db7" podNamespace="kube-system" podName="kube-scheduler-image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.688412    2201 topology_manager.go:215] "Topology Admit Handler" podUID="2414ef2e87d590c46acf59bd230af081" podNamespace="kube-system" podName="etcd-image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.691144    2201 kubelet_node_status.go:70] "Attempting to register node" node="image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.696447    2201 kubelet_node_status.go:108] "Node was previously registered" node="image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.696484    2201 kubelet_node_status.go:73] "Successfully registered node" node="image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.786420    2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/2414ef2e87d590c46acf59bd230af081-etcd-certs\") pod \"etcd-image-329000\" (UID: \"2414ef2e87d590c46acf59bd230af081\") " pod="kube-system/etcd-image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.786437    2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08275d1e965d432205e5708a2ca5b47d-ca-certs\") pod \"kube-apiserver-image-329000\" (UID: \"08275d1e965d432205e5708a2ca5b47d\") " pod="kube-system/kube-apiserver-image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.786448    2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08275d1e965d432205e5708a2ca5b47d-k8s-certs\") pod \"kube-apiserver-image-329000\" (UID: \"08275d1e965d432205e5708a2ca5b47d\") " pod="kube-system/kube-apiserver-image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.786457    2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4eaea8f307aeb9e3b1f69475b461d8c-ca-certs\") pod \"kube-controller-manager-image-329000\" (UID: \"c4eaea8f307aeb9e3b1f69475b461d8c\") " pod="kube-system/kube-controller-manager-image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.786676    2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4eaea8f307aeb9e3b1f69475b461d8c-flexvolume-dir\") pod \"kube-controller-manager-image-329000\" (UID: \"c4eaea8f307aeb9e3b1f69475b461d8c\") " pod="kube-system/kube-controller-manager-image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.786688    2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4eaea8f307aeb9e3b1f69475b461d8c-kubeconfig\") pod \"kube-controller-manager-image-329000\" (UID: \"c4eaea8f307aeb9e3b1f69475b461d8c\") " pod="kube-system/kube-controller-manager-image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.786697    2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4eaea8f307aeb9e3b1f69475b461d8c-usr-share-ca-certificates\") pod \"kube-controller-manager-image-329000\" (UID: \"c4eaea8f307aeb9e3b1f69475b461d8c\") " pod="kube-system/kube-controller-manager-image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.786707    2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08275d1e965d432205e5708a2ca5b47d-usr-share-ca-certificates\") pod \"kube-apiserver-image-329000\" (UID: \"08275d1e965d432205e5708a2ca5b47d\") " pod="kube-system/kube-apiserver-image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.786715    2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4eaea8f307aeb9e3b1f69475b461d8c-k8s-certs\") pod \"kube-controller-manager-image-329000\" (UID: \"c4eaea8f307aeb9e3b1f69475b461d8c\") " pod="kube-system/kube-controller-manager-image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.786744    2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84796ce4422a8feb25b67e35bf823db7-kubeconfig\") pod \"kube-scheduler-image-329000\" (UID: \"84796ce4422a8feb25b67e35bf823db7\") " pod="kube-system/kube-scheduler-image-329000"
	Oct 04 00:27:11 image-329000 kubelet[2201]: I1004 00:27:11.786754    2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/2414ef2e87d590c46acf59bd230af081-etcd-data\") pod \"etcd-image-329000\" (UID: \"2414ef2e87d590c46acf59bd230af081\") " pod="kube-system/etcd-image-329000"
	Oct 04 00:27:12 image-329000 kubelet[2201]: I1004 00:27:12.570691    2201 apiserver.go:52] "Watching apiserver"
	Oct 04 00:27:12 image-329000 kubelet[2201]: I1004 00:27:12.586048    2201 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 04 00:27:12 image-329000 kubelet[2201]: E1004 00:27:12.642118    2201 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-329000\" already exists" pod="kube-system/kube-apiserver-image-329000"
	Oct 04 00:27:12 image-329000 kubelet[2201]: E1004 00:27:12.647479    2201 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-image-329000\" already exists" pod="kube-system/kube-scheduler-image-329000"
	Oct 04 00:27:12 image-329000 kubelet[2201]: I1004 00:27:12.648447    2201 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-329000" podStartSLOduration=1.648412425 podCreationTimestamp="2023-10-04 00:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-04 00:27:12.643625342 +0000 UTC m=+1.126923960" watchObservedRunningTime="2023-10-04 00:27:12.648412425 +0000 UTC m=+1.131711043"
	Oct 04 00:27:12 image-329000 kubelet[2201]: I1004 00:27:12.655980    2201 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-329000" podStartSLOduration=1.655948842 podCreationTimestamp="2023-10-04 00:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-04 00:27:12.654916342 +0000 UTC m=+1.138214960" watchObservedRunningTime="2023-10-04 00:27:12.655948842 +0000 UTC m=+1.139247418"
	Oct 04 00:27:12 image-329000 kubelet[2201]: I1004 00:27:12.656031    2201 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-329000" podStartSLOduration=1.656024009 podCreationTimestamp="2023-10-04 00:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-04 00:27:12.648557425 +0000 UTC m=+1.131856043" watchObservedRunningTime="2023-10-04 00:27:12.656024009 +0000 UTC m=+1.139322626"
	Oct 04 00:27:12 image-329000 kubelet[2201]: I1004 00:27:12.660496    2201 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-329000" podStartSLOduration=1.660480717 podCreationTimestamp="2023-10-04 00:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-04 00:27:12.660408675 +0000 UTC m=+1.143707293" watchObservedRunningTime="2023-10-04 00:27:12.660480717 +0000 UTC m=+1.143779293"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-329000 -n image-329000
helpers_test.go:261: (dbg) Run:  kubectl --context image-329000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-329000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-329000 describe pod storage-provisioner: exit status 1 (37.7875ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-329000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.08s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (52.8s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-830000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Done: kubectl --context ingress-addon-legacy-830000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.033638375s)
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-830000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context ingress-addon-legacy-830000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c86e5cc9-0af5-444e-a040-dd4949406f76] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c86e5cc9-0af5-444e-a040-dd4949406f76] Running
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.014067833s
addons_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-830000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Run:  kubectl --context ingress-addon-legacy-830000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-830000 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:275: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.034170208s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:277: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:281: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
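The failing check above boils down to a DNS query against the ingress-dns addon rather than the host resolver. As a minimal sketch (an illustration only, not minikube test code; 192.168.105.6 is the VM IP reported by the `ip` step above), the same lookup nslookup performs can be reproduced from Go by pointing a resolver at that server:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Route DNS queries to the ingress-dns addon inside the VM instead of
		// the host resolver (address taken from the `minikube ip` step above).
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "192.168.105.6:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			// A timeout here corresponds to the ";; connection timed out;
			// no servers could be reached" output captured above.
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved to:", addrs)
	}
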
addons_test.go:284: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-830000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-830000 addons disable ingress-dns --alsologtostderr -v=1: (8.370814375s)
addons_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-830000 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-830000 addons disable ingress --alsologtostderr -v=1: (7.107977209s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-830000 -n ingress-addon-legacy-830000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-830000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                                      Args                                                       |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount   | -p functional-488000                                                                                            | functional-488000           | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2195568235/001:/mount-9p |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                          |                             |         |         |                     |                     |
	| ssh     | functional-488000 ssh findmnt                                                                                   | functional-488000           | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                          |                             |         |         |                     |                     |
	| ssh     | functional-488000 ssh findmnt                                                                                   | functional-488000           | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | -T /mount-9p | grep 9p                                                                                          |                             |         |         |                     |                     |
	| ssh     | functional-488000 ssh -- ls                                                                                     | functional-488000           | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | -la /mount-9p                                                                                                   |                             |         |         |                     |                     |
	| ssh     | functional-488000 ssh cat                                                                                       | functional-488000           | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | /mount-9p/test-1696379169954988000                                                                              |                             |         |         |                     |                     |
	| ssh     | functional-488000 ssh stat                                                                                      | functional-488000           | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | /mount-9p/created-by-test                                                                                       |                             |         |         |                     |                     |
	| image   | functional-488000                                                                                               | functional-488000           | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | image ls --format json                                                                                          |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                             |         |         |                     |                     |
	| image   | functional-488000                                                                                               | functional-488000           | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | image ls --format table                                                                                         |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                             |         |         |                     |                     |
	| ssh     | functional-488000 ssh pgrep                                                                                     | functional-488000           | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT |                     |
	|         | buildkitd                                                                                                       |                             |         |         |                     |                     |
	| image   | functional-488000 image build -t                                                                                | functional-488000           | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	|         | localhost/my-image:functional-488000                                                                            |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                                                                |                             |         |         |                     |                     |
	| image   | functional-488000 image ls                                                                                      | functional-488000           | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	| delete  | -p functional-488000                                                                                            | functional-488000           | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:26 PDT |
	| start   | -p image-329000 --driver=qemu2                                                                                  | image-329000                | jenkins | v1.31.2 | 03 Oct 23 17:26 PDT | 03 Oct 23 17:27 PDT |
	|         |                                                                                                                 |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                                                             | image-329000                | jenkins | v1.31.2 | 03 Oct 23 17:27 PDT | 03 Oct 23 17:27 PDT |
	|         | ./testdata/image-build/test-normal                                                                              |                             |         |         |                     |                     |
	|         | -p image-329000                                                                                                 |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                                                             | image-329000                | jenkins | v1.31.2 | 03 Oct 23 17:27 PDT | 03 Oct 23 17:27 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str                                                                        |                             |         |         |                     |                     |
	|         | --build-opt=no-cache                                                                                            |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p                                                                              |                             |         |         |                     |                     |
	|         | image-329000                                                                                                    |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                                                             | image-329000                | jenkins | v1.31.2 | 03 Oct 23 17:27 PDT | 03 Oct 23 17:27 PDT |
	|         | ./testdata/image-build/test-normal                                                                              |                             |         |         |                     |                     |
	|         | --build-opt=no-cache -p                                                                                         |                             |         |         |                     |                     |
	|         | image-329000                                                                                                    |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                                                             | image-329000                | jenkins | v1.31.2 | 03 Oct 23 17:27 PDT | 03 Oct 23 17:27 PDT |
	|         | -f inner/Dockerfile                                                                                             |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-f                                                                                   |                             |         |         |                     |                     |
	|         | -p image-329000                                                                                                 |                             |         |         |                     |                     |
	| delete  | -p image-329000                                                                                                 | image-329000                | jenkins | v1.31.2 | 03 Oct 23 17:27 PDT | 03 Oct 23 17:27 PDT |
	| start   | -p ingress-addon-legacy-830000                                                                                  | ingress-addon-legacy-830000 | jenkins | v1.31.2 | 03 Oct 23 17:27 PDT | 03 Oct 23 17:28 PDT |
	|         | --kubernetes-version=v1.18.20                                                                                   |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                                                          |                             |         |         |                     |                     |
	|         | --driver=qemu2                                                                                                  |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-830000                                                                                     | ingress-addon-legacy-830000 | jenkins | v1.31.2 | 03 Oct 23 17:28 PDT | 03 Oct 23 17:28 PDT |
	|         | addons enable ingress                                                                                           |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                                                          |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-830000                                                                                     | ingress-addon-legacy-830000 | jenkins | v1.31.2 | 03 Oct 23 17:28 PDT | 03 Oct 23 17:28 PDT |
	|         | addons enable ingress-dns                                                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                                                          |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-830000                                                                                     | ingress-addon-legacy-830000 | jenkins | v1.31.2 | 03 Oct 23 17:29 PDT | 03 Oct 23 17:29 PDT |
	|         | ssh curl -s http://127.0.0.1/                                                                                   |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                                                                    |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-830000 ip                                                                                  | ingress-addon-legacy-830000 | jenkins | v1.31.2 | 03 Oct 23 17:29 PDT | 03 Oct 23 17:29 PDT |
	| addons  | ingress-addon-legacy-830000                                                                                     | ingress-addon-legacy-830000 | jenkins | v1.31.2 | 03 Oct 23 17:29 PDT | 03 Oct 23 17:29 PDT |
	|         | addons disable ingress-dns                                                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                          |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-830000                                                                                     | ingress-addon-legacy-830000 | jenkins | v1.31.2 | 03 Oct 23 17:29 PDT | 03 Oct 23 17:29 PDT |
	|         | addons disable ingress                                                                                          |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                          |                             |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/03 17:27:14
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:27:14.733153    2898 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:27:14.733297    2898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:27:14.733300    2898 out.go:309] Setting ErrFile to fd 2...
	I1003 17:27:14.733302    2898 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:27:14.733426    2898 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:27:14.734457    2898 out.go:303] Setting JSON to false
	I1003 17:27:14.750729    2898 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1608,"bootTime":1696377626,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:27:14.750809    2898 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:27:14.755023    2898 out.go:177] * [ingress-addon-legacy-830000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:27:14.759901    2898 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:27:14.759997    2898 notify.go:220] Checking for updates...
	I1003 17:27:14.763898    2898 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:27:14.766815    2898 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:27:14.769880    2898 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:27:14.772894    2898 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:27:14.775803    2898 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:27:14.779109    2898 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:27:14.782837    2898 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:27:14.789848    2898 start.go:298] selected driver: qemu2
	I1003 17:27:14.789854    2898 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:27:14.789859    2898 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:27:14.792203    2898 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:27:14.794931    2898 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:27:14.797974    2898 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:27:14.798012    2898 cni.go:84] Creating CNI manager for ""
	I1003 17:27:14.798023    2898 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 17:27:14.798029    2898 start_flags.go:321] config:
	{Name:ingress-addon-legacy-830000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-830000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:27:14.802624    2898 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:27:14.809842    2898 out.go:177] * Starting control plane node ingress-addon-legacy-830000 in cluster ingress-addon-legacy-830000
	I1003 17:27:14.813910    2898 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1003 17:27:14.871224    2898 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1003 17:27:14.871235    2898 cache.go:57] Caching tarball of preloaded images
	I1003 17:27:14.871394    2898 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1003 17:27:14.875954    2898 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1003 17:27:14.883844    2898 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1003 17:27:14.971477    2898 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1003 17:27:23.175286    2898 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1003 17:27:23.175440    2898 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1003 17:27:23.928301    2898 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
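[Editor's note] The preload steps above download the tarball with an md5 checksum embedded in the URL query string, then verify the file on disk. A minimal sketch of that verification, assuming the hex-encoded md5 convention shown in the URL (the helper name is hypothetical; only the Go standard library is used):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the file through an md5 hash and compares the result
// with the expected hex digest (e.g. "c8c260b886393123ce9d312d8ac2379e"
// from the download URL above).
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	if err := verifyMD5("preloaded-images.tar.lz4", "c8c260b886393123ce9d312d8ac2379e"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}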
	I1003 17:27:23.928496    2898 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/config.json ...
	I1003 17:27:23.928512    2898 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/config.json: {Name:mkf1c5f2da4dff3bc762c57195c0f4a3d1928db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:23.928737    2898 start.go:365] acquiring machines lock for ingress-addon-legacy-830000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:27:23.928765    2898 start.go:369] acquired machines lock for "ingress-addon-legacy-830000" in 20.625µs
	I1003 17:27:23.928776    2898 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-830000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-830000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:27:23.928810    2898 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:27:23.933828    2898 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1003 17:27:23.949006    2898 start.go:159] libmachine.API.Create for "ingress-addon-legacy-830000" (driver="qemu2")
	I1003 17:27:23.949023    2898 client.go:168] LocalClient.Create starting
	I1003 17:27:23.949099    2898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:27:23.949127    2898 main.go:141] libmachine: Decoding PEM data...
	I1003 17:27:23.949147    2898 main.go:141] libmachine: Parsing certificate...
	I1003 17:27:23.949183    2898 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:27:23.949200    2898 main.go:141] libmachine: Decoding PEM data...
	I1003 17:27:23.949208    2898 main.go:141] libmachine: Parsing certificate...
	I1003 17:27:23.949537    2898 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:27:24.058983    2898 main.go:141] libmachine: Creating SSH key...
	I1003 17:27:24.156962    2898 main.go:141] libmachine: Creating Disk image...
	I1003 17:27:24.156967    2898 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:27:24.157142    2898 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/disk.qcow2
	I1003 17:27:24.166220    2898 main.go:141] libmachine: STDOUT: 
	I1003 17:27:24.166233    2898 main.go:141] libmachine: STDERR: 
	I1003 17:27:24.166287    2898 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/disk.qcow2 +20000M
	I1003 17:27:24.173751    2898 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:27:24.173766    2898 main.go:141] libmachine: STDERR: 
	I1003 17:27:24.173788    2898 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/disk.qcow2
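[Editor's note] The two qemu-img invocations above first convert the raw scratch image to qcow2, then grow it by the requested 20000 MB. A sketch of driving those same steps from Go, assuming qemu-img is on PATH (paths are placeholders, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

// prepareDisk mirrors the logged sequence: convert raw -> qcow2, then
// resize the qcow2 image by +20000M.
func prepareDisk(raw, qcow2 string) error {
	steps := [][]string{
		{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
		{"qemu-img", "resize", qcow2, "+20000M"},
	}
	for _, args := range steps {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := prepareDisk("disk.qcow2.raw", "disk.qcow2"); err != nil {
		fmt.Println(err)
	}
}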
	I1003 17:27:24.173801    2898 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:27:24.173840    2898 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:b3:d4:48:b1:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/disk.qcow2
	I1003 17:27:24.208270    2898 main.go:141] libmachine: STDOUT: 
	I1003 17:27:24.208292    2898 main.go:141] libmachine: STDERR: 
	I1003 17:27:24.208296    2898 main.go:141] libmachine: Attempt 0
	I1003 17:27:24.208309    2898 main.go:141] libmachine: Searching for 5a:b3:d4:48:b1:1b in /var/db/dhcpd_leases ...
	I1003 17:27:24.208371    2898 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1003 17:27:24.208394    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:12:90:df:54:86:bb ID:1,12:90:df:54:86:bb Lease:0x651e02ce}
	I1003 17:27:24.208401    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3a:42:d7:84:21:44 ID:1,3a:42:d7:84:21:44 Lease:0x651e020b}
	I1003 17:27:24.208412    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a:c5:28:37:53:67 ID:1,a:c5:28:37:53:67 Lease:0x651cb07e}
	I1003 17:27:24.208418    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651cb05b}
	I1003 17:27:26.210819    2898 main.go:141] libmachine: Attempt 1
	I1003 17:27:26.211016    2898 main.go:141] libmachine: Searching for 5a:b3:d4:48:b1:1b in /var/db/dhcpd_leases ...
	I1003 17:27:26.211369    2898 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1003 17:27:26.211419    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:12:90:df:54:86:bb ID:1,12:90:df:54:86:bb Lease:0x651e02ce}
	I1003 17:27:26.211460    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3a:42:d7:84:21:44 ID:1,3a:42:d7:84:21:44 Lease:0x651e020b}
	I1003 17:27:26.211495    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a:c5:28:37:53:67 ID:1,a:c5:28:37:53:67 Lease:0x651cb07e}
	I1003 17:27:26.211523    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651cb05b}
	I1003 17:27:28.213659    2898 main.go:141] libmachine: Attempt 2
	I1003 17:27:28.213689    2898 main.go:141] libmachine: Searching for 5a:b3:d4:48:b1:1b in /var/db/dhcpd_leases ...
	I1003 17:27:28.213756    2898 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1003 17:27:28.213768    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:12:90:df:54:86:bb ID:1,12:90:df:54:86:bb Lease:0x651e02ce}
	I1003 17:27:28.213773    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3a:42:d7:84:21:44 ID:1,3a:42:d7:84:21:44 Lease:0x651e020b}
	I1003 17:27:28.213778    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a:c5:28:37:53:67 ID:1,a:c5:28:37:53:67 Lease:0x651cb07e}
	I1003 17:27:28.213782    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651cb05b}
	I1003 17:27:30.215786    2898 main.go:141] libmachine: Attempt 3
	I1003 17:27:30.215793    2898 main.go:141] libmachine: Searching for 5a:b3:d4:48:b1:1b in /var/db/dhcpd_leases ...
	I1003 17:27:30.215827    2898 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1003 17:27:30.215836    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:12:90:df:54:86:bb ID:1,12:90:df:54:86:bb Lease:0x651e02ce}
	I1003 17:27:30.215842    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3a:42:d7:84:21:44 ID:1,3a:42:d7:84:21:44 Lease:0x651e020b}
	I1003 17:27:30.215848    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a:c5:28:37:53:67 ID:1,a:c5:28:37:53:67 Lease:0x651cb07e}
	I1003 17:27:30.215855    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651cb05b}
	I1003 17:27:32.217874    2898 main.go:141] libmachine: Attempt 4
	I1003 17:27:32.217894    2898 main.go:141] libmachine: Searching for 5a:b3:d4:48:b1:1b in /var/db/dhcpd_leases ...
	I1003 17:27:32.217939    2898 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1003 17:27:32.217948    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:12:90:df:54:86:bb ID:1,12:90:df:54:86:bb Lease:0x651e02ce}
	I1003 17:27:32.217954    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3a:42:d7:84:21:44 ID:1,3a:42:d7:84:21:44 Lease:0x651e020b}
	I1003 17:27:32.217959    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a:c5:28:37:53:67 ID:1,a:c5:28:37:53:67 Lease:0x651cb07e}
	I1003 17:27:32.217965    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651cb05b}
	I1003 17:27:34.219983    2898 main.go:141] libmachine: Attempt 5
	I1003 17:27:34.220009    2898 main.go:141] libmachine: Searching for 5a:b3:d4:48:b1:1b in /var/db/dhcpd_leases ...
	I1003 17:27:34.220065    2898 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1003 17:27:34.220076    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:12:90:df:54:86:bb ID:1,12:90:df:54:86:bb Lease:0x651e02ce}
	I1003 17:27:34.220081    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3a:42:d7:84:21:44 ID:1,3a:42:d7:84:21:44 Lease:0x651e020b}
	I1003 17:27:34.220086    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a:c5:28:37:53:67 ID:1,a:c5:28:37:53:67 Lease:0x651cb07e}
	I1003 17:27:34.220092    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:56:68:9c:60:58:22 ID:1,56:68:9c:60:58:22 Lease:0x651cb05b}
	I1003 17:27:36.221307    2898 main.go:141] libmachine: Attempt 6
	I1003 17:27:36.221368    2898 main.go:141] libmachine: Searching for 5a:b3:d4:48:b1:1b in /var/db/dhcpd_leases ...
	I1003 17:27:36.221497    2898 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1003 17:27:36.221512    2898 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5a:b3:d4:48:b1:1b ID:1,5a:b3:d4:48:b1:1b Lease:0x651e02f7}
	I1003 17:27:36.221518    2898 main.go:141] libmachine: Found match: 5a:b3:d4:48:b1:1b
	I1003 17:27:36.221531    2898 main.go:141] libmachine: IP: 192.168.105.6
	I1003 17:27:36.221540    2898 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
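[Editor's note] The retry loop above polls macOS's /var/db/dhcpd_leases every two seconds until the VM's MAC address appears, then takes its IP. A minimal parser sketch, assuming the usual lease-file layout of one key=value pair per line inside {...} blocks, with hw_address values carrying the "1," type prefix visible in the entries above:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans the lease file for a block whose hw_address matches
// the given MAC and returns that block's ip_address.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			if strings.TrimPrefix(line, "hw_address=1,") == mac {
				return ip, nil
			}
		case line == "}":
			ip = "" // block ended without a match; reset
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("MAC %s not found in %s", mac, path)
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "5a:b3:d4:48:b1:1b")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(ip) // 192.168.105.6 in the run above
}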
	I1003 17:27:38.240077    2898 machine.go:88] provisioning docker machine ...
	I1003 17:27:38.240142    2898 buildroot.go:166] provisioning hostname "ingress-addon-legacy-830000"
	I1003 17:27:38.240293    2898 main.go:141] libmachine: Using SSH client type: native
	I1003 17:27:38.241023    2898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105329e60] 0x10532c5d0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1003 17:27:38.241044    2898 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-830000 && echo "ingress-addon-legacy-830000" | sudo tee /etc/hostname
	I1003 17:27:38.344664    2898 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-830000
	
	I1003 17:27:38.344779    2898 main.go:141] libmachine: Using SSH client type: native
	I1003 17:27:38.345292    2898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105329e60] 0x10532c5d0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1003 17:27:38.345310    2898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-830000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-830000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-830000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 17:27:38.427379    2898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 17:27:38.427402    2898 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17345-986/.minikube CaCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17345-986/.minikube}
	I1003 17:27:38.427420    2898 buildroot.go:174] setting up certificates
	I1003 17:27:38.427427    2898 provision.go:83] configureAuth start
	I1003 17:27:38.427435    2898 provision.go:138] copyHostCerts
	I1003 17:27:38.427473    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17345-986/.minikube/ca.pem
	I1003 17:27:38.427533    2898 exec_runner.go:144] found /Users/jenkins/minikube-integration/17345-986/.minikube/ca.pem, removing ...
	I1003 17:27:38.427547    2898 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17345-986/.minikube/ca.pem
	I1003 17:27:38.427777    2898 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/ca.pem (1082 bytes)
	I1003 17:27:38.428057    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17345-986/.minikube/cert.pem
	I1003 17:27:38.428086    2898 exec_runner.go:144] found /Users/jenkins/minikube-integration/17345-986/.minikube/cert.pem, removing ...
	I1003 17:27:38.428091    2898 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17345-986/.minikube/cert.pem
	I1003 17:27:38.428176    2898 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/cert.pem (1123 bytes)
	I1003 17:27:38.428307    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17345-986/.minikube/key.pem
	I1003 17:27:38.428341    2898 exec_runner.go:144] found /Users/jenkins/minikube-integration/17345-986/.minikube/key.pem, removing ...
	I1003 17:27:38.428344    2898 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17345-986/.minikube/key.pem
	I1003 17:27:38.428426    2898 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17345-986/.minikube/key.pem (1679 bytes)
	I1003 17:27:38.428551    2898 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-830000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-830000]
	I1003 17:27:38.494914    2898 provision.go:172] copyRemoteCerts
	I1003 17:27:38.494940    2898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 17:27:38.494946    2898 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/id_rsa Username:docker}
	I1003 17:27:38.530635    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 17:27:38.530681    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 17:27:38.538043    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 17:27:38.538076    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 17:27:38.545018    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 17:27:38.545053    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1003 17:27:38.551876    2898 provision.go:86] duration metric: configureAuth took 124.443583ms
	I1003 17:27:38.551884    2898 buildroot.go:189] setting minikube options for container-runtime
	I1003 17:27:38.551976    2898 config.go:182] Loaded profile config "ingress-addon-legacy-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 17:27:38.552012    2898 main.go:141] libmachine: Using SSH client type: native
	I1003 17:27:38.552230    2898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105329e60] 0x10532c5d0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1003 17:27:38.552235    2898 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 17:27:38.617996    2898 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 17:27:38.618005    2898 buildroot.go:70] root file system type: tmpfs
	I1003 17:27:38.618061    2898 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 17:27:38.618112    2898 main.go:141] libmachine: Using SSH client type: native
	I1003 17:27:38.618372    2898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105329e60] 0x10532c5d0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1003 17:27:38.618409    2898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 17:27:38.690082    2898 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 17:27:38.690135    2898 main.go:141] libmachine: Using SSH client type: native
	I1003 17:27:38.690379    2898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105329e60] 0x10532c5d0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1003 17:27:38.690388    2898 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 17:27:39.019753    2898 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
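[Editor's note] The `sudo diff -u ... || { mv ...; systemctl ... }` command above is an idempotent install: the rendered unit replaces the live one, and Docker is reloaded and restarted, only when the content actually differs. Here the diff fails because no docker.service exists yet, so the new file is moved into place and the symlink is created. A local sketch of the same compare-then-replace pattern (illustrative only; the names are hypothetical and the real flow runs over SSH):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged writes content to path only when it differs from what
// is already there, and reports whether a reload/restart would be needed.
func installIfChanged(path string, content []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // identical: no restart required
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	return true, os.WriteFile(path, content, 0o644)
}

func main() {
	changed, err := installIfChanged("docker.service", []byte("[Unit]\n...\n"))
	fmt.Println(changed, err) // when changed, daemon-reload + restart follow
}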
	I1003 17:27:39.019764    2898 machine.go:91] provisioned docker machine in 779.674792ms
	I1003 17:27:39.019769    2898 client.go:171] LocalClient.Create took 15.071044625s
	I1003 17:27:39.019780    2898 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-830000" took 15.071084292s
	I1003 17:27:39.019787    2898 start.go:300] post-start starting for "ingress-addon-legacy-830000" (driver="qemu2")
	I1003 17:27:39.019791    2898 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 17:27:39.019848    2898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 17:27:39.019860    2898 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/id_rsa Username:docker}
	I1003 17:27:39.055404    2898 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 17:27:39.056816    2898 info.go:137] Remote host: Buildroot 2021.02.12
	I1003 17:27:39.056824    2898 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17345-986/.minikube/addons for local assets ...
	I1003 17:27:39.056893    2898 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17345-986/.minikube/files for local assets ...
	I1003 17:27:39.056992    2898 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17345-986/.minikube/files/etc/ssl/certs/14472.pem -> 14472.pem in /etc/ssl/certs
	I1003 17:27:39.056997    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/files/etc/ssl/certs/14472.pem -> /etc/ssl/certs/14472.pem
	I1003 17:27:39.057100    2898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 17:27:39.059632    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/files/etc/ssl/certs/14472.pem --> /etc/ssl/certs/14472.pem (1708 bytes)
	I1003 17:27:39.066882    2898 start.go:303] post-start completed in 47.091959ms
	I1003 17:27:39.067238    2898 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/config.json ...
	I1003 17:27:39.067412    2898 start.go:128] duration metric: createHost completed in 15.138895541s
	I1003 17:27:39.067435    2898 main.go:141] libmachine: Using SSH client type: native
	I1003 17:27:39.067650    2898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105329e60] 0x10532c5d0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1003 17:27:39.067654    2898 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1003 17:27:39.136140    2898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696379259.547245960
	
	I1003 17:27:39.136146    2898 fix.go:206] guest clock: 1696379259.547245960
	I1003 17:27:39.136150    2898 fix.go:219] Guest: 2023-10-03 17:27:39.54724596 -0700 PDT Remote: 2023-10-03 17:27:39.067415 -0700 PDT m=+24.354435417 (delta=479.83096ms)
	I1003 17:27:39.136163    2898 fix.go:190] guest clock delta is within tolerance: 479.83096ms
	I1003 17:27:39.136165    2898 start.go:83] releasing machines lock for "ingress-addon-legacy-830000", held for 15.207699917s
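[Editor's note] The fix.go lines above read the guest's clock with `date +%s.%N`, compute the delta against the host, and resync only if it exceeds a tolerance; the ~480ms delta measured here passes. A sketch of that check, with the 2s tolerance an assumed value for illustration:

package main

import (
	"fmt"
	"math"
	"time"
)

// clockDeltaOK reports whether guest/host clock skew is within tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	return math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	guest := time.Unix(1696379259, 547245960) // parsed from "date +%s.%N"
	host := time.Now()
	fmt.Println(clockDeltaOK(guest, host, 2*time.Second))
}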
	I1003 17:27:39.136432    2898 ssh_runner.go:195] Run: cat /version.json
	I1003 17:27:39.136442    2898 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/id_rsa Username:docker}
	I1003 17:27:39.136432    2898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 17:27:39.136473    2898 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/id_rsa Username:docker}
	I1003 17:27:39.216648    2898 ssh_runner.go:195] Run: systemctl --version
	I1003 17:27:39.218676    2898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 17:27:39.220596    2898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 17:27:39.220629    2898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1003 17:27:39.224026    2898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1003 17:27:39.229464    2898 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
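[Editor's note] The two find/sed pipelines above rewrite any pre-existing bridge and podman CNI configs in /etc/cni/net.d so their "subnet" (and, for podman, "gateway") values fall inside the cluster pod CIDR 10.244.0.0/16, and drop IPv6 dst/subnet entries. The sed expressions are easier to read as Go regexps; a rough sketch (the real command also deletes IPv6 lines and edits the files in place):

package main

import (
	"fmt"
	"regexp"
)

var (
	subnetRe  = regexp.MustCompile(`"subnet":\s*"[^"]*"`)
	gatewayRe = regexp.MustCompile(`"gateway":\s*"[^"]*"`)
)

// rewriteCNI forces any subnet to the pod CIDR and any gateway to its
// first address, mirroring the logged substitutions.
func rewriteCNI(conf string) string {
	conf = subnetRe.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
	return gatewayRe.ReplaceAllString(conf, `"gateway": "10.244.0.1"`)
}

func main() {
	fmt.Println(rewriteCNI(`{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}`))
}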
	I1003 17:27:39.229473    2898 start.go:469] detecting cgroup driver to use...
	I1003 17:27:39.229539    2898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 17:27:39.237011    2898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1003 17:27:39.240001    2898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 17:27:39.243199    2898 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 17:27:39.243225    2898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 17:27:39.246444    2898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 17:27:39.249489    2898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 17:27:39.252358    2898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 17:27:39.255274    2898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 17:27:39.258421    2898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 17:27:39.261394    2898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 17:27:39.264016    2898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 17:27:39.267076    2898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:27:39.349896    2898 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 17:27:39.359369    2898 start.go:469] detecting cgroup driver to use...
	I1003 17:27:39.359433    2898 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 17:27:39.364873    2898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 17:27:39.369841    2898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 17:27:39.375417    2898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 17:27:39.380309    2898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 17:27:39.385130    2898 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 17:27:39.451380    2898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 17:27:39.456893    2898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 17:27:39.462720    2898 ssh_runner.go:195] Run: which cri-dockerd
	I1003 17:27:39.464126    2898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 17:27:39.467108    2898 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1003 17:27:39.472020    2898 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 17:27:39.532637    2898 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 17:27:39.592957    2898 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 17:27:39.593028    2898 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 17:27:39.599144    2898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:27:39.661235    2898 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 17:27:40.821652    2898 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.160422417s)
	I1003 17:27:40.821718    2898 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 17:27:40.831346    2898 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 17:27:40.847089    2898 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I1003 17:27:40.847214    2898 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1003 17:27:40.848542    2898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 17:27:40.852418    2898 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1003 17:27:40.852457    2898 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 17:27:40.857718    2898 docker.go:664] Got preloaded images: 
	I1003 17:27:40.857725    2898 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1003 17:27:40.857767    2898 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 17:27:40.860577    2898 ssh_runner.go:195] Run: which lz4
	I1003 17:27:40.861661    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1003 17:27:40.861736    2898 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1003 17:27:40.862930    2898 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 17:27:40.862938    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I1003 17:27:42.513546    2898 docker.go:628] Took 1.651860 seconds to copy over tarball
	I1003 17:27:42.513605    2898 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 17:27:43.818576    2898 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.304970958s)
	I1003 17:27:43.818632    2898 ssh_runner.go:146] rm: /preloaded.tar.lz4
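[Editor's note] The preload tarball is scp'd to /preloaded.tar.lz4 in the VM (~1.65s here) and unpacked over /var with `tar -I lz4` (~1.3s), after which the copy is removed. A sketch of the extraction step from Go, assuming tar and lz4 are available on PATH (the real run executes this over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks an lz4-compressed image tarball into dir and
// returns how long it took, like the Completed: timing lines above.
func extractPreload(tarball, dir string) (time.Duration, error) {
	start := time.Now()
	out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", dir, "-xf", tarball).CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("tar: %v\n%s", err, out)
	}
	return time.Since(start), nil
}

func main() {
	d, err := extractPreload("/preloaded.tar.lz4", "/var")
	fmt.Println(d, err)
}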
	I1003 17:27:43.842159    2898 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 17:27:43.846437    2898 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1003 17:27:43.853581    2898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:27:43.921670    2898 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 17:27:45.411491    2898 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.489833916s)
	I1003 17:27:45.411593    2898 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 17:27:45.417627    2898 docker.go:664] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1003 17:27:45.417637    2898 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1003 17:27:45.417642    2898 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1003 17:27:45.425104    2898 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1003 17:27:45.425147    2898 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 17:27:45.425186    2898 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1003 17:27:45.425232    2898 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1003 17:27:45.425425    2898 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1003 17:27:45.430252    2898 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 17:27:45.431100    2898 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1003 17:27:45.431225    2898 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1003 17:27:45.435826    2898 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 17:27:45.439429    2898 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1003 17:27:45.439617    2898 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1003 17:27:45.439650    2898 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1003 17:27:45.439649    2898 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1003 17:27:45.439676    2898 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1003 17:27:45.440170    2898 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1003 17:27:45.440322    2898 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	W1003 17:27:46.331819    2898 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1003 17:27:46.331935    2898 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1003 17:27:46.338451    2898 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1003 17:27:46.338488    2898 docker.go:317] Removing image: registry.k8s.io/coredns:1.6.7
	I1003 17:27:46.338544    2898 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1003 17:27:46.344583    2898 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W1003 17:27:46.375224    2898 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1003 17:27:46.375364    2898 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1003 17:27:46.389065    2898 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1003 17:27:46.389088    2898 docker.go:317] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1003 17:27:46.389136    2898 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1003 17:27:46.395223    2898 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W1003 17:27:46.413107    2898 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1003 17:27:46.413200    2898 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 17:27:46.419203    2898 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1003 17:27:46.419227    2898 docker.go:317] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 17:27:46.419267    2898 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 17:27:46.429731    2898 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1003 17:27:46.541582    2898 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1003 17:27:46.547710    2898 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1003 17:27:46.547735    2898 docker.go:317] Removing image: registry.k8s.io/pause:3.2
	I1003 17:27:46.547786    2898 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1003 17:27:46.553656    2898 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W1003 17:27:46.753696    2898 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1003 17:27:46.753820    2898 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1003 17:27:46.759908    2898 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1003 17:27:46.759936    2898 docker.go:317] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1003 17:27:46.759977    2898 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1003 17:27:46.765805    2898 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W1003 17:27:46.976087    2898 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1003 17:27:46.976209    2898 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1003 17:27:46.983260    2898 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1003 17:27:46.983285    2898 docker.go:317] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1003 17:27:46.983327    2898 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1003 17:27:46.989619    2898 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W1003 17:27:47.252468    2898 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1003 17:27:47.252586    2898 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1003 17:27:47.258752    2898 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1003 17:27:47.258776    2898 docker.go:317] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1003 17:27:47.258827    2898 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1003 17:27:47.264095    2898 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W1003 17:27:47.420785    2898 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1003 17:27:47.421009    2898 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 17:27:47.439338    2898 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1003 17:27:47.439392    2898 docker.go:317] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 17:27:47.439486    2898 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1003 17:27:47.450733    2898 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1003 17:27:47.450797    2898 cache_images.go:92] LoadImages completed in 2.033187916s
	W1003 17:27:47.450879    2898 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7: no such file or directory
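[Editor's note] The LoadImages pass above inspects each required registry.k8s.io image inside the VM's Docker; because the preload shipped amd64 images under k8s.gcr.io names, every image "needs transfer", gets removed, and a cached arm64 copy is looked up on the host — which is missing, hence the X warning. A sketch of the inspect-and-compare step (illustrative; the real check at cache_images.go:116 runs over SSH in the VM):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the image is absent or present under a
// different ID than expected, in which case a cached copy must be loaded.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present at all
	}
	// Contains, so a "sha256:" prefix on the inspected ID still matches.
	return !strings.Contains(strings.TrimSpace(string(out)), wantID)
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.2",
		"2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c"))
}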
	I1003 17:27:47.450966    2898 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 17:27:47.464052    2898 cni.go:84] Creating CNI manager for ""
	I1003 17:27:47.464068    2898 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 17:27:47.464097    2898 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1003 17:27:47.464111    2898 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-830000 NodeName:ingress-addon-legacy-830000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1003 17:27:47.464237    2898 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-830000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
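[Editor's note] The kubeadm config above is rendered from the options struct logged at kubeadm.go:176 (the `%!"(MISSING)` fragments in the evictionHard values are artifacts of minikube passing literal % signs through a printf-style logger). A minimal text/template sketch of that rendering step, with the struct and template heavily simplified assumptions:

package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	t.Execute(os.Stdout, kubeadmOpts{
		AdvertiseAddress: "192.168.105.6",
		NodeName:         "ingress-addon-legacy-830000",
		PodSubnet:        "10.244.0.0/16",
		K8sVersion:       "v1.18.20",
	})
}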
	I1003 17:27:47.464288    2898 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-830000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-830000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1003 17:27:47.464362    2898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1003 17:27:47.468610    2898 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 17:27:47.468659    2898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 17:27:47.472245    2898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I1003 17:27:47.478693    2898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1003 17:27:47.484492    2898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I1003 17:27:47.489940    2898 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I1003 17:27:47.491215    2898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
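The bash one-liner above makes the /etc/hosts update idempotent: it filters out any stale line for control-plane.minikube.internal, appends the current mapping, and copies the temp file back over /etc/hosts with sudo. The same filter-and-append shape in Go, as a sketch against a hypothetical local file rather than minikube's ssh_runner:

    package main

    import (
        "os"
        "strings"
    )

    // upsertHost rewrites a hosts-format file so that exactly one line maps
    // ip to name, mirroring the grep -v / echo / cp pipeline from the log.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale entry for this hostname
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // Hypothetical path; the real command targets /etc/hosts in the guest.
        _ = upsertHost("hosts.test", "192.168.105.6", "control-plane.minikube.internal")
    }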
	I1003 17:27:47.495002    2898 certs.go:56] Setting up /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000 for IP: 192.168.105.6
	I1003 17:27:47.495014    2898 certs.go:190] acquiring lock for shared ca certs: {Name:mk60f926c1ccb065a30406b60af36acc708e601e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:47.495138    2898 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key
	I1003 17:27:47.495178    2898 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key
	I1003 17:27:47.495212    2898 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.key
	I1003 17:27:47.495219    2898 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt with IP's: []
	I1003 17:27:47.639697    2898 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt ...
	I1003 17:27:47.639702    2898 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: {Name:mk9c549e6c124637e6104453f9284b489b4265cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:47.639973    2898 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.key ...
	I1003 17:27:47.639976    2898 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.key: {Name:mkebfcf2f385cbc5518d4cf74cc70efb38c95a31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:47.640109    2898 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.key.b354f644
	I1003 17:27:47.640119    2898 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I1003 17:27:47.741154    2898 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.crt.b354f644 ...
	I1003 17:27:47.741158    2898 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.crt.b354f644: {Name:mkb2350e87ced9dc1f09b50a396504feb1a5bf83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:47.741299    2898 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.key.b354f644 ...
	I1003 17:27:47.741303    2898 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.key.b354f644: {Name:mk2db85b80907ce3dda0f73231407cdc70e6bc66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:47.741407    2898 certs.go:337] copying /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.crt
	I1003 17:27:47.741650    2898 certs.go:341] copying /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.key
	I1003 17:27:47.741786    2898 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/proxy-client.key
	I1003 17:27:47.741796    2898 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/proxy-client.crt with IP's: []
	I1003 17:27:47.879878    2898 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/proxy-client.crt ...
	I1003 17:27:47.879883    2898 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/proxy-client.crt: {Name:mk84f96b168149ef088b7d52304ae3b1002eef21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:47.880047    2898 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/proxy-client.key ...
	I1003 17:27:47.880050    2898 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/proxy-client.key: {Name:mkf14a38a8a05f3499fd69025682e68ddee02e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:27:47.880189    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 17:27:47.880203    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 17:27:47.880217    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 17:27:47.880229    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 17:27:47.880240    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 17:27:47.880254    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 17:27:47.880263    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 17:27:47.880277    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 17:27:47.880365    2898 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/1447.pem (1338 bytes)
	W1003 17:27:47.880401    2898 certs.go:433] ignoring /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/1447_empty.pem, impossibly tiny 0 bytes
	I1003 17:27:47.880409    2898 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 17:27:47.880433    2898 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem (1082 bytes)
	I1003 17:27:47.880455    2898 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem (1123 bytes)
	I1003 17:27:47.880480    2898 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/Users/jenkins/minikube-integration/17345-986/.minikube/certs/key.pem (1679 bytes)
	I1003 17:27:47.880538    2898 certs.go:437] found cert: /Users/jenkins/minikube-integration/17345-986/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17345-986/.minikube/files/etc/ssl/certs/14472.pem (1708 bytes)
	I1003 17:27:47.880567    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/certs/1447.pem -> /usr/share/ca-certificates/1447.pem
	I1003 17:27:47.880581    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/files/etc/ssl/certs/14472.pem -> /usr/share/ca-certificates/14472.pem
	I1003 17:27:47.880592    2898 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:27:47.880973    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1003 17:27:47.889247    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 17:27:47.896223    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 17:27:47.902929    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 17:27:47.910215    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 17:27:47.917429    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 17:27:47.924157    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 17:27:47.931063    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 17:27:47.938465    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/certs/1447.pem --> /usr/share/ca-certificates/1447.pem (1338 bytes)
	I1003 17:27:47.945895    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/files/etc/ssl/certs/14472.pem --> /usr/share/ca-certificates/14472.pem (1708 bytes)
	I1003 17:27:47.952791    2898 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 17:27:47.959284    2898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 17:27:47.964577    2898 ssh_runner.go:195] Run: openssl version
	I1003 17:27:47.966568    2898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14472.pem && ln -fs /usr/share/ca-certificates/14472.pem /etc/ssl/certs/14472.pem"
	I1003 17:27:47.969940    2898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14472.pem
	I1003 17:27:47.971393    2898 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:23 /usr/share/ca-certificates/14472.pem
	I1003 17:27:47.971416    2898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14472.pem
	I1003 17:27:47.973313    2898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14472.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 17:27:47.976221    2898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 17:27:47.979373    2898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:27:47.980934    2898 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:04 /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:27:47.980952    2898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:27:47.982735    2898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 17:27:47.986332    2898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1447.pem && ln -fs /usr/share/ca-certificates/1447.pem /etc/ssl/certs/1447.pem"
	I1003 17:27:47.989526    2898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1447.pem
	I1003 17:27:47.990942    2898 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:23 /usr/share/ca-certificates/1447.pem
	I1003 17:27:47.990959    2898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1447.pem
	I1003 17:27:47.992853    2898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1447.pem /etc/ssl/certs/51391683.0"
	I1003 17:27:47.995714    2898 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1003 17:27:47.997013    2898 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
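The "likely first start" determination above is driven purely by the exit status: ls on the missing etcd certs directory exits 2, and minikube reads that as "no certs yet" rather than as a fatal error. A sketch of that exit-code branch, run locally here for illustration (the real probe goes through ssh_runner into the guest):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("ls", "/var/lib/minikube/certs/etcd").Run()

        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("etcd certs already present, reusing them")
        case errors.As(err, &exitErr) && exitErr.ExitCode() == 2:
            // ls exits 2 when it cannot access the path, as in the log above.
            fmt.Println("certs directory doesn't exist, likely first start")
        default:
            fmt.Println("unexpected error:", err)
        }
    }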
	I1003 17:27:47.997043    2898 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-830000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-830000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:27:47.997118    2898 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 17:27:48.002470    2898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 17:27:48.005663    2898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 17:27:48.008656    2898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 17:27:48.011350    2898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 17:27:48.011370    2898 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1003 17:27:48.034665    2898 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1003 17:27:48.034751    2898 kubeadm.go:322] [preflight] Running pre-flight checks
	I1003 17:27:48.117744    2898 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 17:27:48.117817    2898 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 17:27:48.117896    2898 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1003 17:27:48.167671    2898 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 17:27:48.168727    2898 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 17:27:48.168764    2898 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1003 17:27:48.237015    2898 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 17:27:48.247240    2898 out.go:204]   - Generating certificates and keys ...
	I1003 17:27:48.247278    2898 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1003 17:27:48.247307    2898 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1003 17:27:48.365741    2898 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 17:27:48.701951    2898 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1003 17:27:48.848797    2898 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1003 17:27:49.027849    2898 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1003 17:27:49.116266    2898 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1003 17:27:49.116336    2898 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-830000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I1003 17:27:49.323183    2898 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1003 17:27:49.323268    2898 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-830000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I1003 17:27:49.413351    2898 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 17:27:49.475234    2898 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 17:27:49.574777    2898 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1003 17:27:49.574832    2898 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 17:27:49.725956    2898 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 17:27:49.763054    2898 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 17:27:49.869403    2898 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 17:27:50.071641    2898 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 17:27:50.071903    2898 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 17:27:50.081117    2898 out.go:204]   - Booting up control plane ...
	I1003 17:27:50.081194    2898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 17:27:50.081240    2898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 17:27:50.081300    2898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 17:27:50.081350    2898 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 17:27:50.081433    2898 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1003 17:28:00.579114    2898 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.501581 seconds
	I1003 17:28:00.579172    2898 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 17:28:00.584060    2898 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 17:28:01.109371    2898 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 17:28:01.109599    2898 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-830000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1003 17:28:01.616407    2898 kubeadm.go:322] [bootstrap-token] Using token: g14azx.syti5irgsx9e830c
	I1003 17:28:01.622552    2898 out.go:204]   - Configuring RBAC rules ...
	I1003 17:28:01.622631    2898 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 17:28:01.622700    2898 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 17:28:01.630334    2898 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 17:28:01.631212    2898 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 17:28:01.632225    2898 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 17:28:01.633457    2898 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 17:28:01.636634    2898 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 17:28:01.824927    2898 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1003 17:28:02.030750    2898 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1003 17:28:02.031357    2898 kubeadm.go:322] 
	I1003 17:28:02.031402    2898 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1003 17:28:02.031409    2898 kubeadm.go:322] 
	I1003 17:28:02.031464    2898 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1003 17:28:02.031473    2898 kubeadm.go:322] 
	I1003 17:28:02.031493    2898 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1003 17:28:02.031557    2898 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 17:28:02.031597    2898 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 17:28:02.031604    2898 kubeadm.go:322] 
	I1003 17:28:02.031642    2898 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1003 17:28:02.031709    2898 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 17:28:02.031766    2898 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 17:28:02.031772    2898 kubeadm.go:322] 
	I1003 17:28:02.031833    2898 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 17:28:02.031904    2898 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1003 17:28:02.031911    2898 kubeadm.go:322] 
	I1003 17:28:02.031987    2898 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g14azx.syti5irgsx9e830c \
	I1003 17:28:02.032071    2898 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9490a7a2ce6f448c9720863afea1604779c0ef0543f588db367e732362807037 \
	I1003 17:28:02.032090    2898 kubeadm.go:322]     --control-plane 
	I1003 17:28:02.032094    2898 kubeadm.go:322] 
	I1003 17:28:02.032156    2898 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1003 17:28:02.032163    2898 kubeadm.go:322] 
	I1003 17:28:02.032218    2898 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g14azx.syti5irgsx9e830c \
	I1003 17:28:02.032306    2898 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9490a7a2ce6f448c9720863afea1604779c0ef0543f588db367e732362807037 
	I1003 17:28:02.032521    2898 kubeadm.go:322] W1004 00:27:48.445874    1425 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1003 17:28:02.032669    2898 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1003 17:28:02.032777    2898 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I1003 17:28:02.032854    2898 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 17:28:02.032969    2898 kubeadm.go:322] W1004 00:27:50.486702    1425 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1003 17:28:02.033059    2898 kubeadm.go:322] W1004 00:27:50.487499    1425 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1003 17:28:02.033070    2898 cni.go:84] Creating CNI manager for ""
	I1003 17:28:02.033091    2898 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 17:28:02.033107    2898 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 17:28:02.033196    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:02.033197    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d9526fa6c1a1bb1b20f95e15606f1308e308d84a minikube.k8s.io/name=ingress-addon-legacy-830000 minikube.k8s.io/updated_at=2023_10_03T17_28_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:02.037492    2898 ops.go:34] apiserver oom_adj: -16
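The -16 above is the legacy OOM-score adjustment read from /proc/<pid>/oom_adj for the kube-apiserver; a negative value makes the kernel's OOM killer much less likely to pick the apiserver under memory pressure. A sketch of the same pgrep-and-read sequence (assumes a Linux host with a single kube-apiserver process):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strconv"
        "strings"
    )

    func main() {
        // Find the apiserver PID the same way the logged command does.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "pgrep failed:", err)
            return
        }
        pid := strings.TrimSpace(string(out))

        raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        adj, _ := strconv.Atoi(strings.TrimSpace(string(raw)))
        fmt.Println("apiserver oom_adj:", adj) // -16 in this run
    }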
	I1003 17:28:02.107184    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:02.145562    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:02.682712    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:03.182800    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:03.682735    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:04.182777    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:04.682716    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:05.182677    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:05.682790    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:06.182603    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:06.682616    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:07.182606    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:07.682435    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:08.182582    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:08.682419    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:09.182405    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:09.682397    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:10.182529    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:10.682330    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:11.182474    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:11.682457    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:12.181668    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:12.682519    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:13.182482    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:13.682475    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:14.182437    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:14.682555    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:15.182382    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:15.682492    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:16.182463    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:16.682467    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:17.182442    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:17.682284    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:18.182468    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:18.682108    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:19.182152    2898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:28:19.235636    2898 kubeadm.go:1081] duration metric: took 17.202850292s to wait for elevateKubeSystemPrivileges.
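The block of near-identical "kubectl get sa default" invocations above is a fixed-interval poll: minikube retries roughly every 500ms until the kube-controller-manager has created the "default" ServiceAccount, then reports the total wait (17.2s in this run). The shape of that wait as a self-contained sketch, where the check function is a stand-in for the real kubectl call:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check every interval until it succeeds or ctx expires.
    func waitFor(ctx context.Context, interval time.Duration, check func() error) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if err := check(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        attempts := 0
        err := waitFor(ctx, 500*time.Millisecond, func() error {
            attempts++ // stand-in: pretend the ServiceAccount appears on try 5
            if attempts < 5 {
                return errors.New(`serviceaccount "default" not found`)
            }
            return nil
        })
        fmt.Printf("done after %d attempts, err=%v\n", attempts, err)
    }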
	I1003 17:28:19.235652    2898 kubeadm.go:406] StartCluster complete in 31.239233834s
	I1003 17:28:19.235662    2898 settings.go:142] acquiring lock: {Name:mkad5f21e92defa14247d9a0adf05a6e4272cec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:28:19.235751    2898 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:28:19.236108    2898 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/kubeconfig: {Name:mke3e06a6a2057954076f4772b87ca4469721c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:28:19.236357    2898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 17:28:19.236402    2898 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1003 17:28:19.236440    2898 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-830000"
	I1003 17:28:19.236448    2898 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-830000"
	I1003 17:28:19.236472    2898 host.go:66] Checking if "ingress-addon-legacy-830000" exists ...
	I1003 17:28:19.236507    2898 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-830000"
	I1003 17:28:19.236540    2898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-830000"
	I1003 17:28:19.236613    2898 config.go:182] Loaded profile config "ingress-addon-legacy-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1003 17:28:19.236618    2898 kapi.go:59] client config for ingress-addon-legacy-830000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.key", CAFile:"/Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065efac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1003 17:28:19.236777    2898 host.go:54] host status for "ingress-addon-legacy-830000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/monitor: connect: connection refused
	W1003 17:28:19.236787    2898 addons.go:277] "ingress-addon-legacy-830000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I1003 17:28:19.237031    2898 cert_rotation.go:137] Starting client certificate rotation controller
	I1003 17:28:19.237535    2898 kapi.go:59] client config for ingress-addon-legacy-830000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.key", CAFile:"/Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065efac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 17:28:19.237682    2898 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-830000"
	I1003 17:28:19.237691    2898 host.go:66] Checking if "ingress-addon-legacy-830000" exists ...
	I1003 17:28:19.238313    2898 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 17:28:19.238319    2898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 17:28:19.238325    2898 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/ingress-addon-legacy-830000/id_rsa Username:docker}
	I1003 17:28:19.263304    2898 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-830000" context rescaled to 1 replicas
	I1003 17:28:19.263330    2898 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:28:19.268641    2898 out.go:177] * Verifying Kubernetes components...
	I1003 17:28:19.276723    2898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 17:28:19.307503    2898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 17:28:19.352851    2898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 17:28:19.353132    2898 kapi.go:59] client config for ingress-addon-legacy-830000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.key", CAFile:"/Users/jenkins/minikube-integration/17345-986/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065efac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 17:28:19.353273    2898 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-830000" to be "Ready" ...
	I1003 17:28:19.357258    2898 node_ready.go:49] node "ingress-addon-legacy-830000" has status "Ready":"True"
	I1003 17:28:19.357264    2898 node_ready.go:38] duration metric: took 3.982208ms waiting for node "ingress-addon-legacy-830000" to be "Ready" ...
	I1003 17:28:19.357268    2898 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1003 17:28:19.361382    2898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-pwcn8" in "kube-system" namespace to be "Ready" ...
	I1003 17:28:19.527418    2898 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1003 17:28:19.533356    2898 addons.go:502] enable addons completed in 296.962125ms: enabled=[storage-provisioner default-storageclass]
	I1003 17:28:19.538695    2898 start.go:923] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
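The pipeline logged at 17:28:19.352851 fetches the coredns ConfigMap, uses sed to splice a hosts stanza (mapping host.minikube.internal to the host gateway, 192.168.105.1) in front of the "forward . /etc/resolv.conf" plugin, and replaces the ConfigMap; the line above confirms the injection. The same splice expressed directly in Go, as a sketch over a plain Corefile string rather than the kubectl round-trip:

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostsBlock inserts a hosts plugin stanza before the forward line
    // of a Corefile, like the sed expression in the log.
    func injectHostsBlock(corefile, hostIP string) string {
        block := "        hosts {\n" +
            "           " + hostIP + " host.minikube.internal\n" +
            "           fallthrough\n" +
            "        }\n"
        var b strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
                b.WriteString(block)
            }
            b.WriteString(line)
        }
        return b.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
        fmt.Print(injectHostsBlock(corefile, "192.168.105.1"))
    }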
	I1003 17:28:21.371629    2898 pod_ready.go:102] pod "coredns-66bff467f8-pwcn8" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-03 17:28:18 -0700 PDT Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1003 17:28:23.377988    2898 pod_ready.go:102] pod "coredns-66bff467f8-pwcn8" in "kube-system" namespace has status "Ready":"False"
	I1003 17:28:24.871146    2898 pod_ready.go:92] pod "coredns-66bff467f8-pwcn8" in "kube-system" namespace has status "Ready":"True"
	I1003 17:28:24.871160    2898 pod_ready.go:81] duration metric: took 5.509881375s waiting for pod "coredns-66bff467f8-pwcn8" in "kube-system" namespace to be "Ready" ...
	I1003 17:28:24.871167    2898 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-830000" in "kube-system" namespace to be "Ready" ...
	I1003 17:28:24.874763    2898 pod_ready.go:92] pod "etcd-ingress-addon-legacy-830000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:28:24.874773    2898 pod_ready.go:81] duration metric: took 3.599ms waiting for pod "etcd-ingress-addon-legacy-830000" in "kube-system" namespace to be "Ready" ...
	I1003 17:28:24.874780    2898 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-830000" in "kube-system" namespace to be "Ready" ...
	I1003 17:28:24.878307    2898 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-830000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:28:24.878314    2898 pod_ready.go:81] duration metric: took 3.528875ms waiting for pod "kube-apiserver-ingress-addon-legacy-830000" in "kube-system" namespace to be "Ready" ...
	I1003 17:28:24.878318    2898 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-830000" in "kube-system" namespace to be "Ready" ...
	I1003 17:28:24.881613    2898 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-830000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:28:24.881621    2898 pod_ready.go:81] duration metric: took 3.298541ms waiting for pod "kube-controller-manager-ingress-addon-legacy-830000" in "kube-system" namespace to be "Ready" ...
	I1003 17:28:24.881627    2898 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sx44q" in "kube-system" namespace to be "Ready" ...
	I1003 17:28:24.884305    2898 pod_ready.go:92] pod "kube-proxy-sx44q" in "kube-system" namespace has status "Ready":"True"
	I1003 17:28:24.884315    2898 pod_ready.go:81] duration metric: took 2.683292ms waiting for pod "kube-proxy-sx44q" in "kube-system" namespace to be "Ready" ...
	I1003 17:28:24.884320    2898 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-830000" in "kube-system" namespace to be "Ready" ...
	I1003 17:28:25.067638    2898 request.go:629] Waited for 183.167125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-830000
	I1003 17:28:25.267627    2898 request.go:629] Waited for 193.924083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-830000
	I1003 17:28:25.275212    2898 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-830000" in "kube-system" namespace has status "Ready":"True"
	I1003 17:28:25.275247    2898 pod_ready.go:81] duration metric: took 390.924417ms waiting for pod "kube-scheduler-ingress-addon-legacy-830000" in "kube-system" namespace to be "Ready" ...
	I1003 17:28:25.275295    2898 pod_ready.go:38] duration metric: took 5.918134s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1003 17:28:25.275385    2898 api_server.go:52] waiting for apiserver process to appear ...
	I1003 17:28:25.275647    2898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 17:28:25.292418    2898 api_server.go:72] duration metric: took 6.029179292s to wait for apiserver process to appear ...
	I1003 17:28:25.292447    2898 api_server.go:88] waiting for apiserver healthz status ...
	I1003 17:28:25.292472    2898 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I1003 17:28:25.305594    2898 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I1003 17:28:25.306703    2898 api_server.go:141] control plane version: v1.18.20
	I1003 17:28:25.306719    2898 api_server.go:131] duration metric: took 14.261625ms to wait for apiserver health ...
	I1003 17:28:25.306727    2898 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 17:28:25.467547    2898 request.go:629] Waited for 160.738583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I1003 17:28:25.480788    2898 system_pods.go:59] 6 kube-system pods found
	I1003 17:28:25.480847    2898 system_pods.go:61] "coredns-66bff467f8-pwcn8" [55f835d6-3c71-41cd-acab-ea2a9bcbe5f8] Running
	I1003 17:28:25.480860    2898 system_pods.go:61] "etcd-ingress-addon-legacy-830000" [19b336f1-5131-433c-871f-41149648f87f] Running
	I1003 17:28:25.480872    2898 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-830000" [ffb19401-5518-4b69-8343-83b5c70b31f7] Running
	I1003 17:28:25.480882    2898 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-830000" [f75036e7-4297-4399-96cf-41007f976451] Running
	I1003 17:28:25.480895    2898 system_pods.go:61] "kube-proxy-sx44q" [61c337c4-dcb9-4838-aa8f-2121c61ecf55] Running
	I1003 17:28:25.480930    2898 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-830000" [55312e6e-9695-4d3c-b29e-dd96bec8ee0e] Running
	I1003 17:28:25.480945    2898 system_pods.go:74] duration metric: took 174.212084ms to wait for pod list to return data ...
	I1003 17:28:25.480959    2898 default_sa.go:34] waiting for default service account to be created ...
	I1003 17:28:25.667496    2898 request.go:629] Waited for 186.41525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I1003 17:28:25.673384    2898 default_sa.go:45] found service account: "default"
	I1003 17:28:25.673417    2898 default_sa.go:55] duration metric: took 192.448417ms for default service account to be created ...
	I1003 17:28:25.673436    2898 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 17:28:25.867607    2898 request.go:629] Waited for 194.028417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I1003 17:28:25.881574    2898 system_pods.go:86] 6 kube-system pods found
	I1003 17:28:25.881613    2898 system_pods.go:89] "coredns-66bff467f8-pwcn8" [55f835d6-3c71-41cd-acab-ea2a9bcbe5f8] Running
	I1003 17:28:25.881625    2898 system_pods.go:89] "etcd-ingress-addon-legacy-830000" [19b336f1-5131-433c-871f-41149648f87f] Running
	I1003 17:28:25.881636    2898 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-830000" [ffb19401-5518-4b69-8343-83b5c70b31f7] Running
	I1003 17:28:25.881645    2898 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-830000" [f75036e7-4297-4399-96cf-41007f976451] Running
	I1003 17:28:25.881655    2898 system_pods.go:89] "kube-proxy-sx44q" [61c337c4-dcb9-4838-aa8f-2121c61ecf55] Running
	I1003 17:28:25.881664    2898 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-830000" [55312e6e-9695-4d3c-b29e-dd96bec8ee0e] Running
	I1003 17:28:25.881680    2898 system_pods.go:126] duration metric: took 208.237625ms to wait for k8s-apps to be running ...
	I1003 17:28:25.881696    2898 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 17:28:25.881853    2898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 17:28:25.897603    2898 system_svc.go:56] duration metric: took 15.899042ms WaitForService to wait for kubelet.
	I1003 17:28:25.897630    2898 kubeadm.go:581] duration metric: took 6.634418334s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1003 17:28:25.897656    2898 node_conditions.go:102] verifying NodePressure condition ...
	I1003 17:28:26.067575    2898 request.go:629] Waited for 169.783917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I1003 17:28:26.075947    2898 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1003 17:28:26.076005    2898 node_conditions.go:123] node cpu capacity is 2
	I1003 17:28:26.076029    2898 node_conditions.go:105] duration metric: took 178.367542ms to run NodePressure ...
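The recurring "Waited for ...ms due to client-side throttling" lines above come from the Kubernetes client's own token-bucket rate limiter, which delays bursts of API calls rather than rejecting them (the message explicitly distinguishes this from server-side priority and fairness). The behavior can be sketched with golang.org/x/time/rate; the QPS and burst values below are illustrative, not client-go's exact defaults:

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // 5 requests/second with a burst of 2: the first calls pass
        // immediately, later ones block ~200ms each.
        limiter := rate.NewLimiter(rate.Limit(5), 2)

        for i := 0; i < 5; i++ {
            start := time.Now()
            if err := limiter.Wait(context.Background()); err != nil {
                fmt.Println(err)
                return
            }
            // This wait is what the request.go lines above are reporting.
            fmt.Printf("request %d waited %v\n", i, time.Since(start).Round(time.Millisecond))
        }
    }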
	I1003 17:28:26.076054    2898 start.go:228] waiting for startup goroutines ...
	I1003 17:28:26.076072    2898 start.go:233] waiting for cluster config update ...
	I1003 17:28:26.076102    2898 start.go:242] writing updated cluster config ...
	I1003 17:28:26.077304    2898 ssh_runner.go:195] Run: rm -f paused
	I1003 17:28:26.140949    2898 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I1003 17:28:26.144285    2898 out.go:177] 
	W1003 17:28:26.148396    2898 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1003 17:28:26.152288    2898 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1003 17:28:26.160332    2898 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-830000" cluster and "default" namespace by default
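The closing warning is a simple minor-version comparison: kubectl 1.27.2 against a 1.18.20 control plane is a skew of 9 minor versions, far outside the one-minor window kubectl officially supports. A sketch of that check (hand-rolled parsing, not minikube's actual version helper):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorOf extracts the minor component of a "major.minor.patch" version.
    func minorOf(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        n, _ := strconv.Atoi(parts[1])
        return n
    }

    func main() {
        client, cluster := "1.27.2", "1.18.20"
        skew := minorOf(client) - minorOf(cluster)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
        if skew > 1 {
            fmt.Println("! kubectl may have incompatibilities with this cluster version")
        }
    }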
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-10-04 00:27:35 UTC, ends at Wed 2023-10-04 00:29:36 UTC. --
	Oct 04 00:29:07 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:07.777839178Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:29:07 ingress-addon-legacy-830000 dockerd[1091]: time="2023-10-04T00:29:07.777960690Z" level=info msg="ignoring event" container=1714f42e7df4560945f9323cf249a9b8f24db77b3f8b93e456f66d4b44600859 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:29:20 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:20.734307475Z" level=info msg="shim disconnected" id=f6d1adf412329e2086ae045d703254a7cc616b8fa8a7335408d7e6009d910477 namespace=moby
	Oct 04 00:29:20 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:20.734337061Z" level=warning msg="cleaning up after shim disconnected" id=f6d1adf412329e2086ae045d703254a7cc616b8fa8a7335408d7e6009d910477 namespace=moby
	Oct 04 00:29:20 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:20.734341353Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:29:20 ingress-addon-legacy-830000 dockerd[1091]: time="2023-10-04T00:29:20.738936721Z" level=info msg="ignoring event" container=f6d1adf412329e2086ae045d703254a7cc616b8fa8a7335408d7e6009d910477 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:29:20 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:20.745462834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 00:29:20 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:20.745522713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:29:20 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:20.745533881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 00:29:20 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:20.745542798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 00:29:20 ingress-addon-legacy-830000 dockerd[1091]: time="2023-10-04T00:29:20.774876702Z" level=info msg="ignoring event" container=219ca321a595c36388139af0829502ca69c3b58ea8ec1d7d5acaabddb76f51ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:29:20 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:20.775148260Z" level=info msg="shim disconnected" id=219ca321a595c36388139af0829502ca69c3b58ea8ec1d7d5acaabddb76f51ae namespace=moby
	Oct 04 00:29:20 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:20.775190804Z" level=warning msg="cleaning up after shim disconnected" id=219ca321a595c36388139af0829502ca69c3b58ea8ec1d7d5acaabddb76f51ae namespace=moby
	Oct 04 00:29:20 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:20.775195263Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:29:20 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:20.779176302Z" level=warning msg="cleanup warnings time=\"2023-10-04T00:29:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Oct 04 00:29:31 ingress-addon-legacy-830000 dockerd[1091]: time="2023-10-04T00:29:31.227394869Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=74879b6ac42dce1e20d89b787a9e4f1de707326cd7d1a58d4c3d637b87c018ca
	Oct 04 00:29:31 ingress-addon-legacy-830000 dockerd[1091]: time="2023-10-04T00:29:31.232539372Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=74879b6ac42dce1e20d89b787a9e4f1de707326cd7d1a58d4c3d637b87c018ca
	Oct 04 00:29:31 ingress-addon-legacy-830000 dockerd[1091]: time="2023-10-04T00:29:31.322108528Z" level=info msg="ignoring event" container=74879b6ac42dce1e20d89b787a9e4f1de707326cd7d1a58d4c3d637b87c018ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:29:31 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:31.322541175Z" level=info msg="shim disconnected" id=74879b6ac42dce1e20d89b787a9e4f1de707326cd7d1a58d4c3d637b87c018ca namespace=moby
	Oct 04 00:29:31 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:31.322604178Z" level=warning msg="cleaning up after shim disconnected" id=74879b6ac42dce1e20d89b787a9e4f1de707326cd7d1a58d4c3d637b87c018ca namespace=moby
	Oct 04 00:29:31 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:31.322614678Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 00:29:31 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:31.357626526Z" level=info msg="shim disconnected" id=429536b6bf4dd1adaf2a190f24819b7145723eabe4f6a8f0ff44de064b9a1fd1 namespace=moby
	Oct 04 00:29:31 ingress-addon-legacy-830000 dockerd[1091]: time="2023-10-04T00:29:31.357695362Z" level=info msg="ignoring event" container=429536b6bf4dd1adaf2a190f24819b7145723eabe4f6a8f0ff44de064b9a1fd1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 00:29:31 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:31.357921624Z" level=warning msg="cleaning up after shim disconnected" id=429536b6bf4dd1adaf2a190f24819b7145723eabe4f6a8f0ff44de064b9a1fd1 namespace=moby
	Oct 04 00:29:31 ingress-addon-legacy-830000 dockerd[1097]: time="2023-10-04T00:29:31.357976960Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                                      COMMAND                  CREATED              STATUS                          PORTS     NAMES
	219ca321a595   97e050c3e21e                               "/hello-app"             16 seconds ago       Exited (1) 15 seconds ago                 k8s_hello-world-app_hello-world-app-5f5d8b66bb-5kfwl_default_bd49be79-c5c6-4a86-a44c-c5f3c73b6e13_2
	274da8fac8c4   k8s.gcr.io/pause:3.2                       "/pause"                 31 seconds ago       Up 30 seconds                             k8s_POD_hello-world-app-5f5d8b66bb-5kfwl_default_bd49be79-c5c6-4a86-a44c-c5f3c73b6e13_0
	111681ed7828   nginx                                      "/docker-entrypoint.…"   38 seconds ago       Up 37 seconds                             k8s_nginx_nginx_default_c86e5cc9-0af5-444e-a040-dd4949406f76_0
	acd82d8a424f   k8s.gcr.io/pause:3.2                       "/pause"                 41 seconds ago       Up 40 seconds                             k8s_POD_nginx_default_c86e5cc9-0af5-444e-a040-dd4949406f76_0
	f6d1adf41232   k8s.gcr.io/pause:3.2                       "/pause"                 52 seconds ago       Exited (0) 15 seconds ago                 k8s_POD_kube-ingress-dns-minikube_kube-system_0a2a6259-375f-4de5-aed7-6b3c959b1d6b_0
	74879b6ac42d   registry.k8s.io/ingress-nginx/controller   "/usr/bin/dumb-init …"   54 seconds ago       Exited (137) 4 seconds ago                k8s_controller_ingress-nginx-controller-7fcf777cb7-ztws4_ingress-nginx_5ec77258-32fd-4b56-ada7-e757ac992488_0
	429536b6bf4d   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) 4 seconds ago                  k8s_POD_ingress-nginx-controller-7fcf777cb7-ztws4_ingress-nginx_5ec77258-32fd-4b56-ada7-e757ac992488_0
	02a7b6eddf7a   jettech/kube-webhook-certgen               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_patch_ingress-nginx-admission-patch-rnp6r_ingress-nginx_19c45c2f-b9e5-4e5a-8896-1a7b83c2d4a9_0
	b0ae3c3aa8f4   jettech/kube-webhook-certgen               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_create_ingress-nginx-admission-create-55lvf_ingress-nginx_17ef58f2-3fee-470f-bfb7-5cfe0d2cfad8_0
	6ac8bec4295d   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-patch-rnp6r_ingress-nginx_19c45c2f-b9e5-4e5a-8896-1a7b83c2d4a9_0
	bb322154979f   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-create-55lvf_ingress-nginx_17ef58f2-3fee-470f-bfb7-5cfe0d2cfad8_0
	9cb0e535e8d9   6e17ba78cf3e                               "/coredns -conf /etc…"   About a minute ago   Up About a minute                         k8s_coredns_coredns-66bff467f8-pwcn8_kube-system_55f835d6-3c71-41cd-acab-ea2a9bcbe5f8_0
	99caf5a0b10b   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_coredns-66bff467f8-pwcn8_kube-system_55f835d6-3c71-41cd-acab-ea2a9bcbe5f8_0
	94214e5d2b8a   565297bc6f7d                               "/usr/local/bin/kube…"   About a minute ago   Up About a minute                         k8s_kube-proxy_kube-proxy-sx44q_kube-system_61c337c4-dcb9-4838-aa8f-2121c61ecf55_0
	8fd5b6ace83d   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-proxy-sx44q_kube-system_61c337c4-dcb9-4838-aa8f-2121c61ecf55_0
	bb5a2852fdcb   68a4fac29a86                               "kube-controller-man…"   About a minute ago   Up About a minute                         k8s_kube-controller-manager_kube-controller-manager-ingress-addon-legacy-830000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	242a19d6d32c   095f37015706                               "kube-scheduler --au…"   About a minute ago   Up About a minute                         k8s_kube-scheduler_kube-scheduler-ingress-addon-legacy-830000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	b75a81ed709f   2694cf044d66                               "kube-apiserver --ad…"   About a minute ago   Up About a minute                         k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-830000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	c586851782e6   ab707b0a0ea3                               "etcd --advertise-cl…"   About a minute ago   Up About a minute                         k8s_etcd_etcd-ingress-addon-legacy-830000_kube-system_5bb93dc35c8503c3fc444c92b440ab41_0
	0140bb57a517   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-scheduler-ingress-addon-legacy-830000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	920b605a76ea   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-controller-manager-ingress-addon-legacy-830000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	34f9e2af0568   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_etcd-ingress-addon-legacy-830000_kube-system_5bb93dc35c8503c3fc444c92b440ab41_0
	626addbb8263   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-apiserver-ingress-addon-legacy-830000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	time="2023-10-04T00:29:36Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [9cb0e535e8d9] <==
	* [INFO] 172.17.0.1:19590 - 44902 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033003s
	[INFO] 172.17.0.1:19590 - 65069 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033128s
	[INFO] 172.17.0.1:48359 - 47074 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009376s
	[INFO] 172.17.0.1:19590 - 26581 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000029836s
	[INFO] 172.17.0.1:19590 - 65139 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028086s
	[INFO] 172.17.0.1:48359 - 33792 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011376s
	[INFO] 172.17.0.1:19590 - 17919 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026336s
	[INFO] 172.17.0.1:48359 - 40296 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012376s
	[INFO] 172.17.0.1:48359 - 36231 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013001s
	[INFO] 172.17.0.1:19590 - 38488 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062714s
	[INFO] 172.17.0.1:48359 - 13080 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000009709s
	[INFO] 172.17.0.1:33218 - 2663 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042545s
	[INFO] 172.17.0.1:29824 - 48172 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000064923s
	[INFO] 172.17.0.1:33218 - 51167 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000023252s
	[INFO] 172.17.0.1:29824 - 37401 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013292s
	[INFO] 172.17.0.1:29824 - 6528 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032962s
	[INFO] 172.17.0.1:33218 - 38324 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043879s
	[INFO] 172.17.0.1:29824 - 50376 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013543s
	[INFO] 172.17.0.1:29824 - 47911 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013793s
	[INFO] 172.17.0.1:33218 - 37112 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051421s
	[INFO] 172.17.0.1:33218 - 37200 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012835s
	[INFO] 172.17.0.1:29824 - 63791 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00000825s
	[INFO] 172.17.0.1:33218 - 6438 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000007334s
	[INFO] 172.17.0.1:29824 - 63864 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000010584s
	[INFO] 172.17.0.1:33218 - 605 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000007667s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-830000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-830000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d9526fa6c1a1bb1b20f95e15606f1308e308d84a
	                    minikube.k8s.io/name=ingress-addon-legacy-830000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_03T17_28_02_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 00:27:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-830000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 00:29:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 00:29:08 +0000   Wed, 04 Oct 2023 00:27:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 00:29:08 +0000   Wed, 04 Oct 2023 00:27:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 00:29:08 +0000   Wed, 04 Oct 2023 00:27:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 00:29:08 +0000   Wed, 04 Oct 2023 00:28:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-830000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 10070b6106c64782b32d3ac472e2c7b9
	  System UUID:                10070b6106c64782b32d3ac472e2c7b9
	  Boot ID:                    3da17537-757e-4653-8d39-5de465b1be4f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-5kfwl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 coredns-66bff467f8-pwcn8                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     78s
	  kube-system                 etcd-ingress-addon-legacy-830000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-apiserver-ingress-addon-legacy-830000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-830000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-sx44q                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-ingress-addon-legacy-830000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                 From        Message
	  ----    ------                   ----                ----        -------
	  Normal  Starting                 100s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s (x4 over 100s)  kubelet     Node ingress-addon-legacy-830000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x4 over 100s)  kubelet     Node ingress-addon-legacy-830000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x3 over 100s)  kubelet     Node ingress-addon-legacy-830000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 88s                 kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s                 kubelet     Node ingress-addon-legacy-830000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s                 kubelet     Node ingress-addon-legacy-830000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s                 kubelet     Node ingress-addon-legacy-830000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                88s                 kubelet     Node ingress-addon-legacy-830000 status is now: NodeReady
	  Normal  Starting                 77s                 kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct 4 00:27] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.652808] EINJ: EINJ table not found.
	[  +0.520951] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044007] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000856] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.173003] systemd-fstab-generator[479]: Ignoring "noauto" for root device
	[  +0.057479] systemd-fstab-generator[490]: Ignoring "noauto" for root device
	[  +0.457335] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[  +0.184487] systemd-fstab-generator[743]: Ignoring "noauto" for root device
	[  +0.058238] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.068992] systemd-fstab-generator[856]: Ignoring "noauto" for root device
	[  +4.261634] systemd-fstab-generator[1063]: Ignoring "noauto" for root device
	[  +1.464314] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.840822] systemd-fstab-generator[1543]: Ignoring "noauto" for root device
	[  +8.317940] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.093966] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 4 00:28] systemd-fstab-generator[2626]: Ignoring "noauto" for root device
	[ +17.690280] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.536249] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.022338] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Oct 4 00:29] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [c586851782e6] <==
	* raft2023/10/04 00:27:57 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/10/04 00:27:57 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/10/04 00:27:57 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/10/04 00:27:57 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-10-04 00:27:58.111003 W | auth: simple token is not cryptographically signed
	2023-10-04 00:27:58.112115 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-04 00:27:58.113831 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-04 00:27:58.113903 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-04 00:27:58.119707 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-04 00:27:58.119749 I | embed: listening for peers on 192.168.105.6:2380
	raft2023/10/04 00:27:58 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-10-04 00:27:58.127744 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/10/04 00:27:58 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/10/04 00:27:58 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/10/04 00:27:58 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/10/04 00:27:58 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/10/04 00:27:58 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-10-04 00:27:58.276092 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-04 00:27:58.276449 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-04 00:27:58.276493 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-04 00:27:58.276608 I | etcdserver: published {Name:ingress-addon-legacy-830000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-10-04 00:27:58.276635 I | embed: ready to serve client requests
	2023-10-04 00:27:58.276731 I | embed: ready to serve client requests
	2023-10-04 00:27:58.277375 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-04 00:27:58.282791 I | embed: serving client requests on 192.168.105.6:2379
	
	* 
	* ==> kernel <==
	*  00:29:36 up 2 min,  0 users,  load average: 0.66, 0.33, 0.12
	Linux ingress-addon-legacy-830000 5.10.57 #1 SMP PREEMPT Mon Sep 18 20:10:16 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b75a81ed709f] <==
	* I1004 00:27:59.885237       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1004 00:27:59.885255       1 cache.go:39] Caches are synced for autoregister controller
	I1004 00:27:59.887051       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1004 00:27:59.905143       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 00:27:59.936545       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1004 00:28:00.786784       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1004 00:28:00.787333       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1004 00:28:00.815011       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1004 00:28:00.820239       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1004 00:28:00.820267       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1004 00:28:00.945495       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 00:28:00.955771       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1004 00:28:01.066157       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I1004 00:28:01.066561       1 controller.go:609] quota admission added evaluator for: endpoints
	I1004 00:28:01.068066       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 00:28:02.107540       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1004 00:28:02.225517       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1004 00:28:02.436979       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1004 00:28:08.636576       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 00:28:18.909017       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1004 00:28:18.963953       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1004 00:28:26.493135       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1004 00:28:55.000683       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1004 00:29:29.155423       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E1004 00:29:29.229640       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [bb5a2852fdcb] <==
	* I1004 00:28:19.060959       1 shared_informer.go:230] Caches are synced for PVC protection 
	I1004 00:28:19.162957       1 shared_informer.go:230] Caches are synced for taint 
	I1004 00:28:19.163223       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W1004 00:28:19.163412       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-830000. Assuming now as a timestamp.
	I1004 00:28:19.163508       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I1004 00:28:19.163628       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1004 00:28:19.164016       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-830000", UID:"ed99eeb5-257b-40d3-b19c-2f8e0876fbc9", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-830000 event: Registered Node ingress-addon-legacy-830000 in Controller
	I1004 00:28:19.270440       1 shared_informer.go:230] Caches are synced for namespace 
	I1004 00:28:19.290637       1 shared_informer.go:230] Caches are synced for service account 
	I1004 00:28:19.376751       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"209c2ef8-528b-444c-9555-7fe531c55d21", APIVersion:"apps/v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1004 00:28:19.383734       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"00b6f427-f956-4421-a1c9-d906defbb85f", APIVersion:"apps/v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-mbspz
	I1004 00:28:19.417430       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1004 00:28:19.462254       1 shared_informer.go:230] Caches are synced for resource quota 
	I1004 00:28:19.464934       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1004 00:28:19.464940       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1004 00:28:19.506158       1 shared_informer.go:230] Caches are synced for job 
	I1004 00:28:19.514598       1 shared_informer.go:230] Caches are synced for resource quota 
	I1004 00:28:26.486649       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"250f87da-6e94-4e16-9b78-19c90a90ca8d", APIVersion:"apps/v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1004 00:28:26.497180       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"f1cce553-47ba-4b4b-92ee-6cac10009aef", APIVersion:"apps/v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-ztws4
	I1004 00:28:26.501771       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"cb77a5ca-c114-49a9-80bf-be3c62460c8e", APIVersion:"batch/v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-55lvf
	I1004 00:28:26.528478       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ae5c4d95-a862-4f75-96fc-4a32b4eec831", APIVersion:"batch/v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-rnp6r
	I1004 00:28:30.105551       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ae5c4d95-a862-4f75-96fc-4a32b4eec831", APIVersion:"batch/v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1004 00:28:30.124080       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"cb77a5ca-c114-49a9-80bf-be3c62460c8e", APIVersion:"batch/v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1004 00:29:05.285804       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"22277cd0-a6c1-4c4b-a50c-b2d9d0850d76", APIVersion:"apps/v1", ResourceVersion:"546", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1004 00:29:05.289346       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"8a394735-f721-4526-b148-cf85fd5a29c6", APIVersion:"apps/v1", ResourceVersion:"547", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-5kfwl
	
	* 
	* ==> kube-proxy [94214e5d2b8a] <==
	* W1004 00:28:19.458723       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1004 00:28:19.463374       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I1004 00:28:19.463388       1 server_others.go:186] Using iptables Proxier.
	I1004 00:28:19.463502       1 server.go:583] Version: v1.18.20
	I1004 00:28:19.465029       1 config.go:133] Starting endpoints config controller
	I1004 00:28:19.465044       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1004 00:28:19.468021       1 config.go:315] Starting service config controller
	I1004 00:28:19.468027       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1004 00:28:19.565496       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1004 00:28:19.571762       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [242a19d6d32c] <==
	* W1004 00:27:59.841809       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 00:27:59.854264       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1004 00:27:59.854277       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1004 00:27:59.855196       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 00:27:59.855237       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 00:27:59.855285       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1004 00:27:59.855311       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1004 00:27:59.856509       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 00:27:59.857055       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 00:27:59.857125       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 00:27:59.857176       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 00:27:59.857250       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 00:27:59.857303       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 00:27:59.857342       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 00:27:59.857344       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 00:27:59.857402       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 00:27:59.857463       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 00:27:59.857503       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 00:27:59.857527       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 00:28:00.725586       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 00:28:00.854489       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 00:28:00.886756       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 00:28:00.920629       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1004 00:28:03.155422       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1004 00:28:18.984117       1 factory.go:503] pod: kube-system/coredns-66bff467f8-mbspz is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 00:27:35 UTC, ends at Wed 2023-10-04 00:29:36 UTC. --
	Oct 04 00:29:09 ingress-addon-legacy-830000 kubelet[2632]: E1004 00:29:09.563485    2632 pod_workers.go:191] Error syncing pod bd49be79-c5c6-4a86-a44c-c5f3c73b6e13 ("hello-world-app-5f5d8b66bb-5kfwl_default(bd49be79-c5c6-4a86-a44c-c5f3c73b6e13)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-5kfwl_default(bd49be79-c5c6-4a86-a44c-c5f3c73b6e13)"
	Oct 04 00:29:20 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:20.705375    2632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9e4ba315cdd0c66aeeeb38a78754c1e1009c79ed709aab990ad1b714b85199fb
	Oct 04 00:29:20 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:20.711528    2632 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-fdlmb" (UniqueName: "kubernetes.io/secret/0a2a6259-375f-4de5-aed7-6b3c959b1d6b-minikube-ingress-dns-token-fdlmb") pod "0a2a6259-375f-4de5-aed7-6b3c959b1d6b" (UID: "0a2a6259-375f-4de5-aed7-6b3c959b1d6b")
	Oct 04 00:29:20 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:20.716116    2632 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2a6259-375f-4de5-aed7-6b3c959b1d6b-minikube-ingress-dns-token-fdlmb" (OuterVolumeSpecName: "minikube-ingress-dns-token-fdlmb") pod "0a2a6259-375f-4de5-aed7-6b3c959b1d6b" (UID: "0a2a6259-375f-4de5-aed7-6b3c959b1d6b"). InnerVolumeSpecName "minikube-ingress-dns-token-fdlmb". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 04 00:29:20 ingress-addon-legacy-830000 kubelet[2632]: W1004 00:29:20.739343    2632 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-5kfwl through plugin: invalid network status for
	Oct 04 00:29:20 ingress-addon-legacy-830000 kubelet[2632]: W1004 00:29:20.788212    2632 container.go:412] Failed to create summary reader for "/kubepods/besteffort/podbd49be79-c5c6-4a86-a44c-c5f3c73b6e13/219ca321a595c36388139af0829502ca69c3b58ea8ec1d7d5acaabddb76f51ae": none of the resources are being tracked.
	Oct 04 00:29:20 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:20.811790    2632 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-fdlmb" (UniqueName: "kubernetes.io/secret/0a2a6259-375f-4de5-aed7-6b3c959b1d6b-minikube-ingress-dns-token-fdlmb") on node "ingress-addon-legacy-830000" DevicePath ""
	Oct 04 00:29:21 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:21.796793    2632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1714f42e7df4560945f9323cf249a9b8f24db77b3f8b93e456f66d4b44600859
	Oct 04 00:29:21 ingress-addon-legacy-830000 kubelet[2632]: W1004 00:29:21.798629    2632 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-5kfwl through plugin: invalid network status for
	Oct 04 00:29:21 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:21.807667    2632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 219ca321a595c36388139af0829502ca69c3b58ea8ec1d7d5acaabddb76f51ae
	Oct 04 00:29:21 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:21.816688    2632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9e4ba315cdd0c66aeeeb38a78754c1e1009c79ed709aab990ad1b714b85199fb
	Oct 04 00:29:21 ingress-addon-legacy-830000 kubelet[2632]: E1004 00:29:21.821515    2632 pod_workers.go:191] Error syncing pod bd49be79-c5c6-4a86-a44c-c5f3c73b6e13 ("hello-world-app-5f5d8b66bb-5kfwl_default(bd49be79-c5c6-4a86-a44c-c5f3c73b6e13)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-5kfwl_default(bd49be79-c5c6-4a86-a44c-c5f3c73b6e13)"
	Oct 04 00:29:22 ingress-addon-legacy-830000 kubelet[2632]: W1004 00:29:22.824314    2632 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-5kfwl through plugin: invalid network status for
	Oct 04 00:29:29 ingress-addon-legacy-830000 kubelet[2632]: E1004 00:29:29.220122    2632 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-ztws4.178ac00c6a96c08d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-ztws4", UID:"5ec77258-32fd-4b56-ada7-e757ac992488", APIVersion:"v1", ResourceVersion:"435", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-830000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13f4a5a4d0ae68d, ext:87026121011, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13f4a5a4d0ae68d, ext:87026121011, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-ztws4.178ac00c6a96c08d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 04 00:29:29 ingress-addon-legacy-830000 kubelet[2632]: E1004 00:29:29.231408    2632 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-ztws4.178ac00c6a96c08d", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-ztws4", UID:"5ec77258-32fd-4b56-ada7-e757ac992488", APIVersion:"v1", ResourceVersion:"435", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-830000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13f4a5a4d0ae68d, ext:87026121011, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13f4a5a4d5f8205, ext:87031665876, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-ztws4.178ac00c6a96c08d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 04 00:29:31 ingress-addon-legacy-830000 kubelet[2632]: W1004 00:29:31.960689    2632 pod_container_deletor.go:77] Container "429536b6bf4dd1adaf2a190f24819b7145723eabe4f6a8f0ff44de064b9a1fd1" not found in pod's containers
	Oct 04 00:29:33 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:33.468402    2632 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-2jtnp" (UniqueName: "kubernetes.io/secret/5ec77258-32fd-4b56-ada7-e757ac992488-ingress-nginx-token-2jtnp") pod "5ec77258-32fd-4b56-ada7-e757ac992488" (UID: "5ec77258-32fd-4b56-ada7-e757ac992488")
	Oct 04 00:29:33 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:33.471620    2632 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5ec77258-32fd-4b56-ada7-e757ac992488-webhook-cert") pod "5ec77258-32fd-4b56-ada7-e757ac992488" (UID: "5ec77258-32fd-4b56-ada7-e757ac992488")
	Oct 04 00:29:33 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:33.477731    2632 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec77258-32fd-4b56-ada7-e757ac992488-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5ec77258-32fd-4b56-ada7-e757ac992488" (UID: "5ec77258-32fd-4b56-ada7-e757ac992488"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 04 00:29:33 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:33.478625    2632 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec77258-32fd-4b56-ada7-e757ac992488-ingress-nginx-token-2jtnp" (OuterVolumeSpecName: "ingress-nginx-token-2jtnp") pod "5ec77258-32fd-4b56-ada7-e757ac992488" (UID: "5ec77258-32fd-4b56-ada7-e757ac992488"). InnerVolumeSpecName "ingress-nginx-token-2jtnp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 04 00:29:33 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:33.572436    2632 reconciler.go:319] Volume detached for volume "ingress-nginx-token-2jtnp" (UniqueName: "kubernetes.io/secret/5ec77258-32fd-4b56-ada7-e757ac992488-ingress-nginx-token-2jtnp") on node "ingress-addon-legacy-830000" DevicePath ""
	Oct 04 00:29:33 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:33.572550    2632 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5ec77258-32fd-4b56-ada7-e757ac992488-webhook-cert") on node "ingress-addon-legacy-830000" DevicePath ""
	Oct 04 00:29:33 ingress-addon-legacy-830000 kubelet[2632]: I1004 00:29:33.707310    2632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 219ca321a595c36388139af0829502ca69c3b58ea8ec1d7d5acaabddb76f51ae
	Oct 04 00:29:33 ingress-addon-legacy-830000 kubelet[2632]: E1004 00:29:33.709119    2632 pod_workers.go:191] Error syncing pod bd49be79-c5c6-4a86-a44c-c5f3c73b6e13 ("hello-world-app-5f5d8b66bb-5kfwl_default(bd49be79-c5c6-4a86-a44c-c5f3c73b6e13)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-5kfwl_default(bd49be79-c5c6-4a86-a44c-c5f3c73b6e13)"
	Oct 04 00:29:34 ingress-addon-legacy-830000 kubelet[2632]: W1004 00:29:34.729785    2632 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/5ec77258-32fd-4b56-ada7-e757ac992488/volumes" does not exist
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-830000 -n ingress-addon-legacy-830000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-830000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (52.80s)
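
Two details in the post-mortem above are worth keeping in mind when reproducing this failure. First, the fatal `validate CRI v1 runtime API for endpoint "unix:///var/run/dockershim.sock"` line at the end of the container-status section is expected on this v1.18.20/dockershim node: a newer crictl probes the v1 RuntimeService, which dockershim never implemented. Second, the host kubectl (v1.27.2) is nine minor versions ahead of the cluster (v1.18.20); the bundled kubectl avoids that skew. A minimal sketch of the workaround the log itself suggests (the binary path is the one used by this test run):

    # Confirm which client version the wrapper ships
    out/minikube-darwin-arm64 kubectl -- version --client
    # Inspect pods with a version-matched client, per the hint in the start output
    out/minikube-darwin-arm64 kubectl -- get pods -A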

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.38s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-226000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-226000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.3117685s)

                                                
                                                
-- stdout --
	* [mount-start-1-226000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-226000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-226000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-226000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-226000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-226000 -n mount-start-1-226000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-226000 -n mount-start-1-226000: exit status 7 (71.714667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-226000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.38s)
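
Every qemu2 start in this run fails at the same step: libmachine launches the VM through socket_vmnet_client, which must first connect to the /var/run/socket_vmnet unix socket, and that connection is refused, meaning no socket_vmnet daemon is accepting on it. Below is a minimal standalone sketch (hypothetical, not part of the test suite) that reproduces just that step in Go:

// probe_socket_vmnet.go - a minimal sketch (not part of the minikube test
// suite) that reproduces the failing step in isolation: connecting to the
// socket_vmnet unix socket that socket_vmnet_client dials before exec'ing QEMU.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// The path matches SocketVMnetPath in the cluster configs logged below.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the ERROR lines in the log:
		// the socket file may exist, but no daemon is accepting on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

With the daemon down, every --driver=qemu2 test that follows fails the same way before Kubernetes ever starts, and the later assertion failures are downstream of that one condition.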

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.77s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-609000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-609000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.700682667s)

                                                
                                                
-- stdout --
	* [multinode-609000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-609000 in cluster multinode-609000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-609000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 17:31:48.358040    3308 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:31:48.358181    3308 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:31:48.358183    3308 out.go:309] Setting ErrFile to fd 2...
	I1003 17:31:48.358186    3308 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:31:48.358324    3308 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:31:48.359404    3308 out.go:303] Setting JSON to false
	I1003 17:31:48.375571    3308 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1882,"bootTime":1696377626,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:31:48.375652    3308 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:31:48.379926    3308 out.go:177] * [multinode-609000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:31:48.386879    3308 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:31:48.390857    3308 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:31:48.386932    3308 notify.go:220] Checking for updates...
	I1003 17:31:48.393879    3308 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:31:48.396800    3308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:31:48.399815    3308 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:31:48.402852    3308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:31:48.405870    3308 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:31:48.409786    3308 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:31:48.415784    3308 start.go:298] selected driver: qemu2
	I1003 17:31:48.415790    3308 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:31:48.415795    3308 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:31:48.418015    3308 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:31:48.420883    3308 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:31:48.423945    3308 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:31:48.423969    3308 cni.go:84] Creating CNI manager for ""
	I1003 17:31:48.423972    3308 cni.go:136] 0 nodes found, recommending kindnet
	I1003 17:31:48.423976    3308 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 17:31:48.423981    3308 start_flags.go:321] config:
	{Name:multinode-609000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-609000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0 AutoPauseInterval:1m0s}
	I1003 17:31:48.428617    3308 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:31:48.435791    3308 out.go:177] * Starting control plane node multinode-609000 in cluster multinode-609000
	I1003 17:31:48.439914    3308 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:31:48.439938    3308 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:31:48.439950    3308 cache.go:57] Caching tarball of preloaded images
	I1003 17:31:48.440012    3308 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:31:48.440018    3308 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:31:48.440202    3308 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/multinode-609000/config.json ...
	I1003 17:31:48.440213    3308 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/multinode-609000/config.json: {Name:mk66f79a6f65d5da560b2064a5e52f9f463e7bde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:31:48.440420    3308 start.go:365] acquiring machines lock for multinode-609000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:31:48.440450    3308 start.go:369] acquired machines lock for "multinode-609000" in 24.459µs
	I1003 17:31:48.440461    3308 start.go:93] Provisioning new machine with config: &{Name:multinode-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-609000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:31:48.440492    3308 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:31:48.448816    3308 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:31:48.465156    3308 start.go:159] libmachine.API.Create for "multinode-609000" (driver="qemu2")
	I1003 17:31:48.465179    3308 client.go:168] LocalClient.Create starting
	I1003 17:31:48.465234    3308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:31:48.465263    3308 main.go:141] libmachine: Decoding PEM data...
	I1003 17:31:48.465273    3308 main.go:141] libmachine: Parsing certificate...
	I1003 17:31:48.465307    3308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:31:48.465327    3308 main.go:141] libmachine: Decoding PEM data...
	I1003 17:31:48.465335    3308 main.go:141] libmachine: Parsing certificate...
	I1003 17:31:48.465677    3308 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:31:48.575581    3308 main.go:141] libmachine: Creating SSH key...
	I1003 17:31:48.636133    3308 main.go:141] libmachine: Creating Disk image...
	I1003 17:31:48.636138    3308 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:31:48.636313    3308 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2
	I1003 17:31:48.645122    3308 main.go:141] libmachine: STDOUT: 
	I1003 17:31:48.645137    3308 main.go:141] libmachine: STDERR: 
	I1003 17:31:48.645183    3308 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2 +20000M
	I1003 17:31:48.653904    3308 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:31:48.653923    3308 main.go:141] libmachine: STDERR: 
	I1003 17:31:48.653937    3308 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2
	I1003 17:31:48.653944    3308 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:31:48.653997    3308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:63:12:f2:f2:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2
	I1003 17:31:48.655732    3308 main.go:141] libmachine: STDOUT: 
	I1003 17:31:48.655744    3308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:31:48.655762    3308 client.go:171] LocalClient.Create took 190.583083ms
	I1003 17:31:50.657939    3308 start.go:128] duration metric: createHost completed in 2.217455375s
	I1003 17:31:50.658037    3308 start.go:83] releasing machines lock for "multinode-609000", held for 2.217621917s
	W1003 17:31:50.658092    3308 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:31:50.669345    3308 out.go:177] * Deleting "multinode-609000" in qemu2 ...
	W1003 17:31:50.689480    3308 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:31:50.689511    3308 start.go:703] Will try again in 5 seconds ...
	I1003 17:31:55.691658    3308 start.go:365] acquiring machines lock for multinode-609000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:31:55.692085    3308 start.go:369] acquired machines lock for "multinode-609000" in 330.75µs
	I1003 17:31:55.692204    3308 start.go:93] Provisioning new machine with config: &{Name:multinode-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-609000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:31:55.692502    3308 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:31:55.702173    3308 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:31:55.751248    3308 start.go:159] libmachine.API.Create for "multinode-609000" (driver="qemu2")
	I1003 17:31:55.751301    3308 client.go:168] LocalClient.Create starting
	I1003 17:31:55.751423    3308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:31:55.751473    3308 main.go:141] libmachine: Decoding PEM data...
	I1003 17:31:55.751498    3308 main.go:141] libmachine: Parsing certificate...
	I1003 17:31:55.751565    3308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:31:55.751601    3308 main.go:141] libmachine: Decoding PEM data...
	I1003 17:31:55.751617    3308 main.go:141] libmachine: Parsing certificate...
	I1003 17:31:55.752203    3308 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:31:55.875478    3308 main.go:141] libmachine: Creating SSH key...
	I1003 17:31:55.972996    3308 main.go:141] libmachine: Creating Disk image...
	I1003 17:31:55.973002    3308 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:31:55.973164    3308 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2
	I1003 17:31:55.982073    3308 main.go:141] libmachine: STDOUT: 
	I1003 17:31:55.982087    3308 main.go:141] libmachine: STDERR: 
	I1003 17:31:55.982142    3308 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2 +20000M
	I1003 17:31:55.989585    3308 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:31:55.989606    3308 main.go:141] libmachine: STDERR: 
	I1003 17:31:55.989619    3308 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2
	I1003 17:31:55.989623    3308 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:31:55.989657    3308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6e:5c:0e:b4:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2
	I1003 17:31:55.991335    3308 main.go:141] libmachine: STDOUT: 
	I1003 17:31:55.991348    3308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:31:55.991371    3308 client.go:171] LocalClient.Create took 240.059667ms
	I1003 17:31:57.993572    3308 start.go:128] duration metric: createHost completed in 2.301070916s
	I1003 17:31:57.993663    3308 start.go:83] releasing machines lock for "multinode-609000", held for 2.3016005s
	W1003 17:31:57.994187    3308 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-609000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-609000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:31:58.002318    3308 out.go:177] 
	W1003 17:31:58.007446    3308 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:31:58.007501    3308 out.go:239] * 
	* 
	W1003 17:31:58.010063    3308 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:31:58.018365    3308 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-609000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000: exit status 7 (65.396542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-609000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.77s)
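
The stderr trace above shows minikube's internal recovery path: the first createHost fails, the partial machine is deleted, it waits five seconds, retries once, and the second failure becomes the GUEST_PROVISION exit (status 80). A simplified sketch of that shape, with createHost stubbed to fail the way this log does (the real logic lives in minikube's start code and is more involved):

// retry_shape.go - an assumed, simplified sketch of the start/retry flow
// visible in the trace above; not minikube's actual implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the libmachine create step that fails in the log.
func createHost(name string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func deleteHost(name string) { fmt.Printf("* Deleting %q in qemu2 ...\n", name) }

func startWithRetry(name string) error {
	err := createHost(name)
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	deleteHost(name)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	return createHost(name)     // second failure becomes the GUEST_PROVISION exit
}

func main() {
	if err := startWithRetry("multinode-609000"); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}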

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (90.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (111.879667ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-609000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- rollout status deployment/busybox: exit status 1 (54.037833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (54.118667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.977709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E1003 17:31:59.484146    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.918459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.640667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.998625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.752875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.78ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.428708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.822584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E1003 17:33:21.405126    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.951666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.86125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- exec  -- nslookup kubernetes.io: exit status 1 (53.871833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.983ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.593917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000: exit status 7 (28.940375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-609000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (90.56s)
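
With no cluster behind the profile, every kubectl call errors immediately; the test still polls `get pods -o jsonpath='{.items[*].status.podIP}'` ten times over about ninety seconds before declaring "failed to resolve pod IPs". A sketch of that poll-until-deadline pattern (hypothetical helper; the test's actual retry spacing differs):

// poll_pod_ips.go - a hypothetical helper, not the test's own code, showing
// the pattern in the log: rerun the jsonpath query until it succeeds with a
// non-empty result or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	return strings.TrimSpace(string(out)), err
}

func waitForPodIPs(profile string, deadline time.Duration) (string, error) {
	stop := time.Now().Add(deadline)
	for {
		ips, err := podIPs(profile)
		if err == nil && ips != "" {
			return ips, nil
		}
		if time.Now().After(stop) {
			return "", fmt.Errorf("failed to resolve pod IPs before deadline: %v", err)
		}
		time.Sleep(10 * time.Second) // spacing is assumed; the test backs off differently
	}
}

func main() {
	ips, err := waitForPodIPs("multinode-609000", 90*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pod IPs:", ips)
}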

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-609000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.913416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000: exit status 7 (28.99225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-609000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-609000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-609000 -v 3 --alsologtostderr: exit status 89 (39.303208ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-609000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 17:33:28.770351    3411 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:33:28.770610    3411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:28.770613    3411 out.go:309] Setting ErrFile to fd 2...
	I1003 17:33:28.770616    3411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:28.770747    3411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:33:28.770972    3411 mustload.go:65] Loading cluster: multinode-609000
	I1003 17:33:28.771154    3411 config.go:182] Loaded profile config "multinode-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:33:28.775718    3411 out.go:177] * The control plane node must be running for this command
	I1003 17:33:28.778817    3411 out.go:177]   To start a cluster, run: "minikube start -p multinode-609000"

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-609000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000: exit status 7 (28.649ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-609000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
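
`node add` refuses to run against a stopped control plane and exits 89, which is what the assertion keys on. The report's "(dbg) Run" / "Non-zero exit" lines come from helpers that run the binary and recover the numeric status; a sketch of that recovery (hypothetical, not the helpers' actual code):

// run_and_exitcode.go - a sketch mirroring the "(dbg) Run" / "Non-zero exit"
// lines in this report: run the minikube binary and recover the numeric exit
// status, such as the 89 seen here.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func run(bin string, args ...string) (int, []byte) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err == nil {
		return 0, out
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), out // e.g. 89: control plane not running
	}
	return -1, out // binary missing, not executable, etc.
}

func main() {
	code, out := run("out/minikube-darwin-arm64",
		"node", "add", "-p", "multinode-609000", "-v", "3", "--alsologtostderr")
	fmt.Printf("exit status %d\n%s", code, out)
}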

                                                
                                    
TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-609000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-609000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-609000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidd
en\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.2\",\"ClusterName\":\"multinode-609000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\
",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPat
h\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000: exit status 7 (28.823042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-609000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
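
The assertion decodes the `profile list --output json` payload quoted above and counts the Nodes entries for the profile; because provisioning never got past the first node, the config records one node where the test expects three. A sketch of that check, with the structs trimmed (assumed shapes) to just the fields used here:

// count_profile_nodes.go - a minimal sketch with assumed struct shapes,
// trimmed to the fields behind this failure: decode the profile list JSON
// and count Nodes for a named profile.
package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func nodeCount(raw []byte, profile string) (int, error) {
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		return 0, err
	}
	for _, p := range pl.Valid {
		if p.Name == profile {
			return len(p.Config.Nodes), nil
		}
	}
	return 0, fmt.Errorf("profile %q not found", profile)
}

func main() {
	// Trimmed from the payload in the log: one node where the test wants three.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-609000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	n, _ := nodeCount(raw, "multinode-609000")
	fmt.Printf("multinode-609000 has %d node(s); the test expects 3\n", n)
}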

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-609000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-609000 status --output json --alsologtostderr: exit status 7 (28.203667ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-609000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 17:33:28.935896    3421 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:33:28.936045    3421 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:28.936048    3421 out.go:309] Setting ErrFile to fd 2...
	I1003 17:33:28.936051    3421 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:28.936180    3421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:33:28.936290    3421 out.go:303] Setting JSON to true
	I1003 17:33:28.936302    3421 mustload.go:65] Loading cluster: multinode-609000
	I1003 17:33:28.936362    3421 notify.go:220] Checking for updates...
	I1003 17:33:28.936503    3421 config.go:182] Loaded profile config "multinode-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:33:28.936509    3421 status.go:255] checking status of multinode-609000 ...
	I1003 17:33:28.936710    3421 status.go:330] multinode-609000 host status = "Stopped" (err=<nil>)
	I1003 17:33:28.936714    3421 status.go:343] host is not running, skipping remaining checks
	I1003 17:33:28.936719    3421 status.go:257] multinode-609000 status: &{Name:multinode-609000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-609000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000: exit status 7 (28.5605ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-609000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
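
This one is a decode-shape mismatch rather than a missing server: with a single stopped node, `status --output json` printed one object, but the test unmarshals into []cmd.Status and fails with "cannot unmarshal object into Go value of type []cmd.Status". A sketch (using a local stand-in for cmd.Status, which lives in minikube's cmd package) of decoding either shape:

// decode_status.go - a sketch of tolerant decoding for the status JSON,
// whether the CLI prints a single object (one node, as in this log) or an
// array. Status here is an assumed local stand-in for cmd.Status.
package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func decodeStatuses(raw []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	// Fallback for the single-object shape that made the test's
	// []cmd.Status unmarshal fail above.
	var one Status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-609000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	st, err := decodeStatuses(raw)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%d status record(s), host=%s\n", len(st), st[0].Host)
}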

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-609000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-609000 node stop m03: exit status 85 (44.059084ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-609000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-609000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-609000 status: exit status 7 (28.848291ms)

                                                
                                                
-- stdout --
	multinode-609000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-609000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-609000 status --alsologtostderr: exit status 7 (28.616542ms)

                                                
                                                
-- stdout --
	multinode-609000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 17:33:29.066796    3431 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:33:29.067000    3431 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:29.067003    3431 out.go:309] Setting ErrFile to fd 2...
	I1003 17:33:29.067005    3431 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:29.067129    3431 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:33:29.067241    3431 out.go:303] Setting JSON to false
	I1003 17:33:29.067253    3431 mustload.go:65] Loading cluster: multinode-609000
	I1003 17:33:29.067319    3431 notify.go:220] Checking for updates...
	I1003 17:33:29.067472    3431 config.go:182] Loaded profile config "multinode-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:33:29.067477    3431 status.go:255] checking status of multinode-609000 ...
	I1003 17:33:29.067669    3431 status.go:330] multinode-609000 host status = "Stopped" (err=<nil>)
	I1003 17:33:29.067673    3431 status.go:343] host is not running, skipping remaining checks
	I1003 17:33:29.067675    3431 status.go:257] multinode-609000 status: &{Name:multinode-609000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-609000 status --alsologtostderr": multinode-609000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000: exit status 7 (28.643417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-609000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

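The StopNode failure above bottoms out in a simple count: the test scans the status output for per-node "kubelet:" lines and expects a running kubelet to remain after stopping only m03, but the lone control-plane node is itself stopped, so the count is zero. A minimal sketch of that counting check in Go, using a hypothetical countState helper rather than the actual multinode_test.go code:

package main

import (
	"fmt"
	"strings"
)

// countState is a hypothetical helper mirroring the assertion that failed
// above: count how many nodes report the given component state in
// `minikube status` output.
func countState(statusOut, stateLine string) int {
	return strings.Count(statusOut, stateLine)
}

func main() {
	// The exact stdout captured in the log above: one stopped node.
	out := `multinode-609000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	fmt.Println(countState(out, "kubelet: Running")) // 0 -> "incorrect number of running kubelets"
}
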
TestMultiNode/serial/StartAfterStop (0.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-609000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-609000 node start m03 --alsologtostderr: exit status 85 (46.450875ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1003 17:33:29.124169    3435 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:33:29.124403    3435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:29.124406    3435 out.go:309] Setting ErrFile to fd 2...
	I1003 17:33:29.124408    3435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:29.124527    3435 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:33:29.124765    3435 mustload.go:65] Loading cluster: multinode-609000
	I1003 17:33:29.124955    3435 config.go:182] Loaded profile config "multinode-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:33:29.128993    3435 out.go:177] 
	W1003 17:33:29.133079    3435 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1003 17:33:29.133085    3435 out.go:239] * 
	* 
	W1003 17:33:29.134555    3435 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:33:29.139068    3435 out.go:177] 

** /stderr **
multinode_test.go:256: I1003 17:33:29.124169    3435 out.go:296] Setting OutFile to fd 1 ...
I1003 17:33:29.124403    3435 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:33:29.124406    3435 out.go:309] Setting ErrFile to fd 2...
I1003 17:33:29.124408    3435 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:33:29.124527    3435 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
I1003 17:33:29.124765    3435 mustload.go:65] Loading cluster: multinode-609000
I1003 17:33:29.124955    3435 config.go:182] Loaded profile config "multinode-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:33:29.128993    3435 out.go:177] 
W1003 17:33:29.133079    3435 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1003 17:33:29.133085    3435 out.go:239] * 
* 
W1003 17:33:29.134555    3435 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1003 17:33:29.139068    3435 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-609000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-609000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-609000 status: exit status 7 (29.080792ms)

-- stdout --
	multinode-609000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-609000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000: exit status 7 (28.888083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-609000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)

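The "(dbg) Non-zero exit ... exit status 85" lines show the harness capturing the exit code instead of aborting on error. A sketch of that capture pattern (an assumed simplification; the real wrapper lives in helpers_test.go):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation that returned exit status 85 above.
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-609000",
		"node", "start", "m03", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 85 corresponds to the GUEST_NODE_RETRIEVE exit seen in the log above.
		fmt.Printf("(dbg) Non-zero exit: exit status %d\n%s", exitErr.ExitCode(), out)
	}
}
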
TestMultiNode/serial/RestartKeepsNodes (5.37s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-609000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-609000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-609000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-609000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.180884875s)

-- stdout --
	* [multinode-609000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-609000 in cluster multinode-609000
	* Restarting existing qemu2 VM for "multinode-609000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-609000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:33:29.317939    3445 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:33:29.318103    3445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:29.318106    3445 out.go:309] Setting ErrFile to fd 2...
	I1003 17:33:29.318109    3445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:29.318259    3445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:33:29.319208    3445 out.go:303] Setting JSON to false
	I1003 17:33:29.335356    3445 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1983,"bootTime":1696377626,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:33:29.335436    3445 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:33:29.340092    3445 out.go:177] * [multinode-609000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:33:29.347044    3445 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:33:29.351060    3445 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:33:29.347105    3445 notify.go:220] Checking for updates...
	I1003 17:33:29.356037    3445 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:33:29.359044    3445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:33:29.362029    3445 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:33:29.368977    3445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:33:29.373282    3445 config.go:182] Loaded profile config "multinode-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:33:29.373331    3445 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:33:29.378007    3445 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 17:33:29.383967    3445 start.go:298] selected driver: qemu2
	I1003 17:33:29.383974    3445 start.go:902] validating driver "qemu2" against &{Name:multinode-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-609000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:33:29.384028    3445 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:33:29.386397    3445 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:33:29.386420    3445 cni.go:84] Creating CNI manager for ""
	I1003 17:33:29.386425    3445 cni.go:136] 1 nodes found, recommending kindnet
	I1003 17:33:29.386430    3445 start_flags.go:321] config:
	{Name:multinode-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-609000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:33:29.390637    3445 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:33:29.399014    3445 out.go:177] * Starting control plane node multinode-609000 in cluster multinode-609000
	I1003 17:33:29.402986    3445 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:33:29.403003    3445 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:33:29.403018    3445 cache.go:57] Caching tarball of preloaded images
	I1003 17:33:29.403089    3445 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:33:29.403095    3445 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:33:29.403165    3445 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/multinode-609000/config.json ...
	I1003 17:33:29.403568    3445 start.go:365] acquiring machines lock for multinode-609000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:33:29.403598    3445 start.go:369] acquired machines lock for "multinode-609000" in 23.375µs
	I1003 17:33:29.403606    3445 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:33:29.403612    3445 fix.go:54] fixHost starting: 
	I1003 17:33:29.403722    3445 fix.go:102] recreateIfNeeded on multinode-609000: state=Stopped err=<nil>
	W1003 17:33:29.403730    3445 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:33:29.411977    3445 out.go:177] * Restarting existing qemu2 VM for "multinode-609000" ...
	I1003 17:33:29.415093    3445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6e:5c:0e:b4:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2
	I1003 17:33:29.417165    3445 main.go:141] libmachine: STDOUT: 
	I1003 17:33:29.417180    3445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:33:29.417206    3445 fix.go:56] fixHost completed within 13.595291ms
	I1003 17:33:29.417212    3445 start.go:83] releasing machines lock for "multinode-609000", held for 13.609708ms
	W1003 17:33:29.417217    3445 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:33:29.417257    3445 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:33:29.417262    3445 start.go:703] Will try again in 5 seconds ...
	I1003 17:33:34.419409    3445 start.go:365] acquiring machines lock for multinode-609000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:33:34.419705    3445 start.go:369] acquired machines lock for "multinode-609000" in 211.416µs
	I1003 17:33:34.419831    3445 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:33:34.419853    3445 fix.go:54] fixHost starting: 
	I1003 17:33:34.420418    3445 fix.go:102] recreateIfNeeded on multinode-609000: state=Stopped err=<nil>
	W1003 17:33:34.420440    3445 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:33:34.424912    3445 out.go:177] * Restarting existing qemu2 VM for "multinode-609000" ...
	I1003 17:33:34.433147    3445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6e:5c:0e:b4:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2
	I1003 17:33:34.438504    3445 main.go:141] libmachine: STDOUT: 
	I1003 17:33:34.438549    3445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:33:34.438612    3445 fix.go:56] fixHost completed within 18.763792ms
	I1003 17:33:34.438627    3445 start.go:83] releasing machines lock for "multinode-609000", held for 18.902834ms
	W1003 17:33:34.438774    3445 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-609000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-609000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:33:34.444775    3445 out.go:177] 
	W1003 17:33:34.448828    3445 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:33:34.448875    3445 out.go:239] * 
	* 
	W1003 17:33:34.450161    3445 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:33:34.458821    3445 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-609000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-609000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000: exit status 7 (32.132208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-609000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.37s)

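Both restart attempts above die on the same host-side symptom: Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. nothing is listening on the unix socket that socket_vmnet_client is told to use. A small diagnostic probe of that socket (a sketch, not part of minikube; the path is copied from the libmachine log line):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Unix socket path taken verbatim from the driver invocation above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here reproduces the driver failure exactly.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe is refused as well, the remedy is host-side (bring the socket_vmnet daemon back up); no amount of `minikube delete` will help, which is why every qemu2 start in this run fails identically.
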
TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-609000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-609000 node delete m03: exit status 89 (39.112625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-609000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-609000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-609000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-609000 status --alsologtostderr: exit status 7 (28.658916ms)

-- stdout --
	multinode-609000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1003 17:33:34.638061    3469 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:33:34.638251    3469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:34.638254    3469 out.go:309] Setting ErrFile to fd 2...
	I1003 17:33:34.638256    3469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:34.638402    3469 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:33:34.638515    3469 out.go:303] Setting JSON to false
	I1003 17:33:34.638527    3469 mustload.go:65] Loading cluster: multinode-609000
	I1003 17:33:34.638591    3469 notify.go:220] Checking for updates...
	I1003 17:33:34.638752    3469 config.go:182] Loaded profile config "multinode-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:33:34.638758    3469 status.go:255] checking status of multinode-609000 ...
	I1003 17:33:34.638939    3469 status.go:330] multinode-609000 host status = "Stopped" (err=<nil>)
	I1003 17:33:34.638942    3469 status.go:343] host is not running, skipping remaining checks
	I1003 17:33:34.638944    3469 status.go:257] multinode-609000 status: &{Name:multinode-609000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-609000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000: exit status 7 (29.169917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-609000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

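node delete exits with status 89 because the control plane is stopped, and the post-mortem helper above already runs the exact probe that detects this. A sketch of gating a node operation on that probe (a hypothetical wrapper; the command line is the one from helpers_test.go:239):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `status --format={{.Host}}` exits non-zero for a stopped host (exit
	// status 7 above), so the error is ignored and only stdout is inspected.
	out, _ := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "multinode-609000", "-n", "multinode-609000").Output()
	if strings.TrimSpace(string(out)) != "Running" {
		fmt.Println("control plane is not running; `node delete` would exit 89 — start the cluster first")
		return
	}
	fmt.Println("safe to run node delete")
}
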
TestMultiNode/serial/StopMultiNode (0.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-609000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-609000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-609000 status: exit status 7 (29.283959ms)

-- stdout --
	multinode-609000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-609000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-609000 status --alsologtostderr: exit status 7 (28.293416ms)

-- stdout --
	multinode-609000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1003 17:33:34.786164    3477 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:33:34.786353    3477 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:34.786357    3477 out.go:309] Setting ErrFile to fd 2...
	I1003 17:33:34.786359    3477 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:34.786489    3477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:33:34.786606    3477 out.go:303] Setting JSON to false
	I1003 17:33:34.786622    3477 mustload.go:65] Loading cluster: multinode-609000
	I1003 17:33:34.786687    3477 notify.go:220] Checking for updates...
	I1003 17:33:34.786828    3477 config.go:182] Loaded profile config "multinode-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:33:34.786833    3477 status.go:255] checking status of multinode-609000 ...
	I1003 17:33:34.787025    3477 status.go:330] multinode-609000 host status = "Stopped" (err=<nil>)
	I1003 17:33:34.787029    3477 status.go:343] host is not running, skipping remaining checks
	I1003 17:33:34.787031    3477 status.go:257] multinode-609000 status: &{Name:multinode-609000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-609000 status --alsologtostderr": multinode-609000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-609000 status --alsologtostderr": multinode-609000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000: exit status 7 (28.813417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-609000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.15s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-609000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-609000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.176555917s)

-- stdout --
	* [multinode-609000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-609000 in cluster multinode-609000
	* Restarting existing qemu2 VM for "multinode-609000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-609000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:33:34.843246    3481 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:33:34.843389    3481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:34.843392    3481 out.go:309] Setting ErrFile to fd 2...
	I1003 17:33:34.843394    3481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:33:34.843518    3481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:33:34.844553    3481 out.go:303] Setting JSON to false
	I1003 17:33:34.860844    3481 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1988,"bootTime":1696377626,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:33:34.860911    3481 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:33:34.866191    3481 out.go:177] * [multinode-609000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:33:34.872143    3481 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:33:34.872223    3481 notify.go:220] Checking for updates...
	I1003 17:33:34.876232    3481 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:33:34.879240    3481 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:33:34.882154    3481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:33:34.885192    3481 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:33:34.888238    3481 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:33:34.889975    3481 config.go:182] Loaded profile config "multinode-609000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:33:34.890240    3481 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:33:34.894171    3481 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 17:33:34.900987    3481 start.go:298] selected driver: qemu2
	I1003 17:33:34.900994    3481 start.go:902] validating driver "qemu2" against &{Name:multinode-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-609000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:33:34.901045    3481 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:33:34.903248    3481 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:33:34.903272    3481 cni.go:84] Creating CNI manager for ""
	I1003 17:33:34.903276    3481 cni.go:136] 1 nodes found, recommending kindnet
	I1003 17:33:34.903281    3481 start_flags.go:321] config:
	{Name:multinode-609000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-609000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:33:34.907498    3481 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:33:34.914195    3481 out.go:177] * Starting control plane node multinode-609000 in cluster multinode-609000
	I1003 17:33:34.918103    3481 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:33:34.918115    3481 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:33:34.918124    3481 cache.go:57] Caching tarball of preloaded images
	I1003 17:33:34.918170    3481 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:33:34.918177    3481 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:33:34.918225    3481 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/multinode-609000/config.json ...
	I1003 17:33:34.918622    3481 start.go:365] acquiring machines lock for multinode-609000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:33:34.918648    3481 start.go:369] acquired machines lock for "multinode-609000" in 20.542µs
	I1003 17:33:34.918656    3481 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:33:34.918662    3481 fix.go:54] fixHost starting: 
	I1003 17:33:34.918780    3481 fix.go:102] recreateIfNeeded on multinode-609000: state=Stopped err=<nil>
	W1003 17:33:34.918789    3481 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:33:34.927221    3481 out.go:177] * Restarting existing qemu2 VM for "multinode-609000" ...
	I1003 17:33:34.931172    3481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6e:5c:0e:b4:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2
	I1003 17:33:34.933118    3481 main.go:141] libmachine: STDOUT: 
	I1003 17:33:34.933134    3481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:33:34.933166    3481 fix.go:56] fixHost completed within 14.50375ms
	I1003 17:33:34.933172    3481 start.go:83] releasing machines lock for "multinode-609000", held for 14.519791ms
	W1003 17:33:34.933176    3481 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:33:34.933207    3481 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:33:34.933211    3481 start.go:703] Will try again in 5 seconds ...
	I1003 17:33:39.934864    3481 start.go:365] acquiring machines lock for multinode-609000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:33:39.935208    3481 start.go:369] acquired machines lock for "multinode-609000" in 249.5µs
	I1003 17:33:39.935362    3481 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:33:39.935384    3481 fix.go:54] fixHost starting: 
	I1003 17:33:39.936137    3481 fix.go:102] recreateIfNeeded on multinode-609000: state=Stopped err=<nil>
	W1003 17:33:39.936163    3481 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:33:39.941198    3481 out.go:177] * Restarting existing qemu2 VM for "multinode-609000" ...
	I1003 17:33:39.949828    3481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6e:5c:0e:b4:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/multinode-609000/disk.qcow2
	I1003 17:33:39.959428    3481 main.go:141] libmachine: STDOUT: 
	I1003 17:33:39.959482    3481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:33:39.959576    3481 fix.go:56] fixHost completed within 24.195125ms
	I1003 17:33:39.959597    3481 start.go:83] releasing machines lock for "multinode-609000", held for 24.36625ms
	W1003 17:33:39.959803    3481 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-609000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-609000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:33:39.967240    3481 out.go:177] 
	W1003 17:33:39.971178    3481 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:33:39.971270    3481 out.go:239] * 
	* 
	W1003 17:33:39.973948    3481 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:33:39.982155    3481 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-609000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000: exit status 7 (70.225333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-609000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

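The stderr trace spells out the driver's retry shape: a failed fixHost, "! StartHost failed, but will try again", a five-second pause ("Will try again in 5 seconds ..."), one more identical failure, then the GUEST_PROVISION exit. A sketch of that two-attempt pattern (an assumed simplification of the flow visible in the start.go log lines, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails twice above; on this
// host it always gets "Connection refused" from socket_vmnet.
func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2
	var err error
	for i := 1; i <= attempts; i++ {
		if err = startHost(); err == nil {
			fmt.Println("host started")
			return
		}
		if i < attempts {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
		}
	}
	fmt.Printf("X Exiting due to GUEST_PROVISION: error provisioning guest: %v\n", err)
}
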
TestMultiNode/serial/ValidateNameConflict (20.1s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-609000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-609000-m01 --driver=qemu2 
E1003 17:33:43.787699    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
E1003 17:33:43.794172    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
E1003 17:33:43.806320    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
E1003 17:33:43.828524    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
E1003 17:33:43.870739    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
E1003 17:33:43.952982    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
E1003 17:33:44.115411    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
E1003 17:33:44.437684    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
E1003 17:33:45.080134    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
E1003 17:33:46.362456    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
E1003 17:33:48.924873    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
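
Note: the repeated cert_rotation.go errors above come from process 1447, a long-lived test binary whose client-go certificate reloader still points at the ingress-addon-legacy-830000 profile, apparently deleted earlier in the run; each reload attempt fails on the missing client.crt. This is noise unrelated to the qemu2 failures. A quick confirmation that the path is simply gone (path copied verbatim from the log):

	# Expected output: "No such file or directory", matching the error lines above.
	ls -l /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt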
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-609000-m01 --driver=qemu2 : exit status 80 (9.747460042s)

-- stdout --
	* [multinode-609000-m01] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-609000-m01 in cluster multinode-609000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-609000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-609000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-609000-m02 --driver=qemu2 
E1003 17:33:54.047443    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-609000-m02 --driver=qemu2 : exit status 80 (10.111748458s)

-- stdout --
	* [multinode-609000-m02] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-609000-m02 in cluster multinode-609000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-609000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-609000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-609000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-609000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-609000: exit status 89 (76.280167ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-609000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-609000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-609000 -n multinode-609000: exit status 7 (29.273083ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-609000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.10s)

TestPreload (9.96s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-072000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E1003 17:34:04.289070    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-072000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.79566225s)

-- stdout --
	* [test-preload-072000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-072000 in cluster test-preload-072000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-072000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:34:00.320323    3540 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:34:00.320480    3540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:34:00.320483    3540 out.go:309] Setting ErrFile to fd 2...
	I1003 17:34:00.320486    3540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:34:00.320603    3540 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:34:00.321576    3540 out.go:303] Setting JSON to false
	I1003 17:34:00.337875    3540 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2014,"bootTime":1696377626,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:34:00.337972    3540 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:34:00.347137    3540 out.go:177] * [test-preload-072000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:34:00.351261    3540 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:34:00.351322    3540 notify.go:220] Checking for updates...
	I1003 17:34:00.358122    3540 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:34:00.361241    3540 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:34:00.364102    3540 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:34:00.367208    3540 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:34:00.370241    3540 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:34:00.373448    3540 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:34:00.373493    3540 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:34:00.377093    3540 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:34:00.383092    3540 start.go:298] selected driver: qemu2
	I1003 17:34:00.383098    3540 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:34:00.383104    3540 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:34:00.385436    3540 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:34:00.388184    3540 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:34:00.391247    3540 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:34:00.391266    3540 cni.go:84] Creating CNI manager for ""
	I1003 17:34:00.391275    3540 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:34:00.391281    3540 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:34:00.391285    3540 start_flags.go:321] config:
	{Name:test-preload-072000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-072000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:34:00.395866    3540 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:34:00.403117    3540 out.go:177] * Starting control plane node test-preload-072000 in cluster test-preload-072000
	I1003 17:34:00.407231    3540 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1003 17:34:00.407321    3540 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/test-preload-072000/config.json ...
	I1003 17:34:00.407340    3540 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/test-preload-072000/config.json: {Name:mk15d0392e58ffebaf9c73a4e12e96311add2f48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:34:00.407350    3540 cache.go:107] acquiring lock: {Name:mke6761129c092d51429699589b3a5edc55054c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:34:00.407361    3540 cache.go:107] acquiring lock: {Name:mka1cadac3ebecf1c9f0651f202b5f351e41005c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:34:00.407366    3540 cache.go:107] acquiring lock: {Name:mkfa0418fe9274a30c613d7632eea00947106693 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:34:00.407522    3540 cache.go:107] acquiring lock: {Name:mke891eb3e593db5612786d42986a1d138fb47e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:34:00.407500    3540 cache.go:107] acquiring lock: {Name:mkda2e6d7b186153f685db162959fbcde7e1853c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:34:00.407546    3540 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1003 17:34:00.407606    3540 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 17:34:00.407632    3540 cache.go:107] acquiring lock: {Name:mk3f44fca9ddd7a806f914d37e34fab2d5135230 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:34:00.407669    3540 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1003 17:34:00.407727    3540 start.go:365] acquiring machines lock for test-preload-072000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:34:00.407727    3540 cache.go:107] acquiring lock: {Name:mk6bc143a87e5359a3e46b3b909b45be1a95ab8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:34:00.407734    3540 cache.go:107] acquiring lock: {Name:mk8a9dc4fa57bfef0bcad3e6c38c8d36844efbb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:34:00.407761    3540 start.go:369] acquired machines lock for "test-preload-072000" in 28.167µs
	I1003 17:34:00.407763    3540 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1003 17:34:00.407779    3540 start.go:93] Provisioning new machine with config: &{Name:test-preload-072000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-072000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:34:00.407823    3540 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:34:00.407841    3540 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1003 17:34:00.411197    3540 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:34:00.407857    3540 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1003 17:34:00.407888    3540 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1003 17:34:00.408007    3540 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1003 17:34:00.415790    3540 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1003 17:34:00.415841    3540 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1003 17:34:00.415888    3540 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1003 17:34:00.415898    3540 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1003 17:34:00.416139    3540 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 17:34:00.419144    3540 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1003 17:34:00.419215    3540 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1003 17:34:00.419269    3540 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1003 17:34:00.427751    3540 start.go:159] libmachine.API.Create for "test-preload-072000" (driver="qemu2")
	I1003 17:34:00.427770    3540 client.go:168] LocalClient.Create starting
	I1003 17:34:00.427838    3540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:34:00.427867    3540 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:00.427882    3540 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:00.427924    3540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:34:00.427943    3540 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:00.427952    3540 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:00.428289    3540 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:34:00.546332    3540 main.go:141] libmachine: Creating SSH key...
	I1003 17:34:00.721960    3540 main.go:141] libmachine: Creating Disk image...
	I1003 17:34:00.721975    3540 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:34:00.722153    3540 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/disk.qcow2
	I1003 17:34:00.731189    3540 main.go:141] libmachine: STDOUT: 
	I1003 17:34:00.731204    3540 main.go:141] libmachine: STDERR: 
	I1003 17:34:00.731256    3540 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/disk.qcow2 +20000M
	I1003 17:34:00.739397    3540 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:34:00.739412    3540 main.go:141] libmachine: STDERR: 
	I1003 17:34:00.739435    3540 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/disk.qcow2
	I1003 17:34:00.739443    3540 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:34:00.739482    3540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:28:64:0f:da:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/disk.qcow2
	I1003 17:34:00.741271    3540 main.go:141] libmachine: STDOUT: 
	I1003 17:34:00.741285    3540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:34:00.741303    3540 client.go:171] LocalClient.Create took 313.534ms
	I1003 17:34:01.074257    3540 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1003 17:34:01.120968    3540 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1003 17:34:01.397483    3540 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1003 17:34:01.619117    3540 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1003 17:34:02.011970    3540 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1003 17:34:02.142425    3540 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1003 17:34:02.142441    3540 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.735001542s
	I1003 17:34:02.142451    3540 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1003 17:34:02.183254    3540 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1003 17:34:02.183281    3540 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W1003 17:34:02.225867    3540 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1003 17:34:02.225954    3540 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1003 17:34:02.391300    3540 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1003 17:34:02.391321    3540 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.984000291s
	I1003 17:34:02.391335    3540 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1003 17:34:02.526556    3540 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1003 17:34:02.742352    3540 start.go:128] duration metric: createHost completed in 2.334556708s
	I1003 17:34:02.742407    3540 start.go:83] releasing machines lock for "test-preload-072000", held for 2.334683209s
	W1003 17:34:02.742466    3540 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:02.756394    3540 out.go:177] * Deleting "test-preload-072000" in qemu2 ...
	W1003 17:34:02.776313    3540 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:02.776343    3540 start.go:703] Will try again in 5 seconds ...
	I1003 17:34:03.487703    3540 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1003 17:34:03.487752    3540 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.080082666s
	I1003 17:34:03.487813    3540 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1003 17:34:03.898495    3540 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1003 17:34:03.898546    3540 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.490974875s
	I1003 17:34:03.898574    3540 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1003 17:34:06.397502    3540 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1003 17:34:06.397581    3540 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.990088125s
	I1003 17:34:06.397620    3540 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1003 17:34:06.652405    3540 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1003 17:34:06.652476    3540 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.24525725s
	I1003 17:34:06.652514    3540 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1003 17:34:06.825625    3540 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1003 17:34:06.825669    3540 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.418452667s
	I1003 17:34:06.825702    3540 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1003 17:34:07.776467    3540 start.go:365] acquiring machines lock for test-preload-072000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:34:07.776873    3540 start.go:369] acquired machines lock for "test-preload-072000" in 332.333µs
	I1003 17:34:07.776992    3540 start.go:93] Provisioning new machine with config: &{Name:test-preload-072000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-072000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:34:07.777250    3540 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:34:07.784811    3540 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:34:07.833345    3540 start.go:159] libmachine.API.Create for "test-preload-072000" (driver="qemu2")
	I1003 17:34:07.833379    3540 client.go:168] LocalClient.Create starting
	I1003 17:34:07.833533    3540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:34:07.833600    3540 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:07.833624    3540 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:07.833688    3540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:34:07.833726    3540 main.go:141] libmachine: Decoding PEM data...
	I1003 17:34:07.833742    3540 main.go:141] libmachine: Parsing certificate...
	I1003 17:34:07.834243    3540 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:34:07.958216    3540 main.go:141] libmachine: Creating SSH key...
	I1003 17:34:08.027926    3540 main.go:141] libmachine: Creating Disk image...
	I1003 17:34:08.027931    3540 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:34:08.028086    3540 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/disk.qcow2
	I1003 17:34:08.037141    3540 main.go:141] libmachine: STDOUT: 
	I1003 17:34:08.037157    3540 main.go:141] libmachine: STDERR: 
	I1003 17:34:08.037215    3540 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/disk.qcow2 +20000M
	I1003 17:34:08.044892    3540 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:34:08.044904    3540 main.go:141] libmachine: STDERR: 
	I1003 17:34:08.044916    3540 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/disk.qcow2
	I1003 17:34:08.044921    3540 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:34:08.044958    3540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:78:9e:f7:02:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/test-preload-072000/disk.qcow2
	I1003 17:34:08.046721    3540 main.go:141] libmachine: STDOUT: 
	I1003 17:34:08.046733    3540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:34:08.046747    3540 client.go:171] LocalClient.Create took 213.368042ms
	I1003 17:34:10.048121    3540 start.go:128] duration metric: createHost completed in 2.270858333s
	I1003 17:34:10.048190    3540 start.go:83] releasing machines lock for "test-preload-072000", held for 2.271339084s
	W1003 17:34:10.048411    3540 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-072000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-072000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:34:10.059979    3540 out.go:177] 
	W1003 17:34:10.063967    3540 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:34:10.063995    3540 out.go:239] * 
	* 
	W1003 17:34:10.066662    3540 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:34:10.075926    3540 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-072000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:523: *** TestPreload FAILED at 2023-10-03 17:34:10.091126 -0700 PDT m=+1849.313229918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-072000 -n test-preload-072000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-072000 -n test-preload-072000: exit status 7 (64.481542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-072000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-072000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-072000
--- FAIL: TestPreload (9.96s)
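
Note: in the stderr trace above, disk-image creation succeeds (both qemu-img calls log empty STDERR); only the socket_vmnet_client launch fails. The disk step can be reproduced standalone, as a sketch with illustrative file names mirroring the logged libmachine calls:

	# Convert the raw boot2docker image to qcow2, then grow it to the requested size.
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M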

TestScheduledStopUnix (10.09s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-546000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-546000 --memory=2048 --driver=qemu2 : exit status 80 (9.931641917s)

-- stdout --
	* [scheduled-stop-546000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-546000 in cluster scheduled-stop-546000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-546000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-546000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-546000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-546000 in cluster scheduled-stop-546000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-546000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-546000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-10-03 17:34:20.186691 -0700 PDT m=+1859.408991251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-546000 -n scheduled-stop-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-546000 -n scheduled-stop-546000: exit status 7 (66.955833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-546000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-546000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-546000
--- FAIL: TestScheduledStopUnix (10.09s)

TestSkaffold (12.18s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2620353705 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-562000 --memory=2600 --driver=qemu2 
E1003 17:34:24.771412    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-562000 --memory=2600 --driver=qemu2 : exit status 80 (9.854703625s)

-- stdout --
	* [skaffold-562000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-562000 in cluster skaffold-562000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-562000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-562000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-562000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-562000 in cluster skaffold-562000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-562000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-562000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:523: *** TestSkaffold FAILED at 2023-10-03 17:34:32.367397 -0700 PDT m=+1871.589922834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-562000 -n skaffold-562000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-562000 -n skaffold-562000: exit status 7 (61.518125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-562000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-562000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-562000
--- FAIL: TestSkaffold (12.18s)
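
Every qemu2-driver failure in this run has the same signature: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the VM never gets a network and the start aborts with GUEST_PROVISION (exit status 80). "Connection refused" on a unix socket means nothing is listening at that path, i.e. the socket_vmnet daemon is not running on this agent. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew at the paths the log shows (/opt/socket_vmnet/...):

	# confirm the socket exists and a daemon is behind it
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet || echo "socket_vmnet daemon not running"
	# restart the daemon (it needs root to open vmnet), then re-run a failing test
	sudo brew services restart socket_vmnet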

TestRunningBinaryUpgrade (172.55s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E1003 17:35:24.574072    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
E1003 17:35:37.535960    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:36:05.242722    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:36:27.652088    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
version_upgrade_test.go:107: v1.6.2 release installation failed: bad response code: 404
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-03 17:38:05.330897 -0700 PDT m=+2084.557575126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-969000 -n running-upgrade-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-969000 -n running-upgrade-969000: exit status 85 (87.05875ms)

-- stdout --
	* Profile "running-upgrade-969000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-969000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-969000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-969000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-969000\"")
helpers_test.go:175: Cleaning up "running-upgrade-969000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-969000
--- FAIL: TestRunningBinaryUpgrade (172.55s)
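
Unlike the socket_vmnet failures above, this test dies before it ever touches the driver: version_upgrade_test.go:107 reports a 404 while installing the v1.6.2 minikube release binary. That is consistent with the agent being darwin/arm64: minikube v1.6.2 predates Apple-silicon builds, so no arm64 release asset exists to download. A quick way to see the asset gap (the exact URL the test fetches is an assumption here; GitHub answers 302 for assets that exist):

	curl -sI https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-darwin-amd64 | head -n1
	curl -sI https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-darwin-arm64 | head -n1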

TestKubernetesUpgrade (15.19s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.684050334s)

-- stdout --
	* [kubernetes-upgrade-476000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-476000 in cluster kubernetes-upgrade-476000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-476000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:38:05.692692    4110 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:38:05.692844    4110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:38:05.692848    4110 out.go:309] Setting ErrFile to fd 2...
	I1003 17:38:05.692850    4110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:38:05.692988    4110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:38:05.694001    4110 out.go:303] Setting JSON to false
	I1003 17:38:05.710206    4110 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2259,"bootTime":1696377626,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:38:05.710296    4110 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:38:05.715551    4110 out.go:177] * [kubernetes-upgrade-476000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:38:05.722701    4110 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:38:05.725548    4110 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:38:05.722779    4110 notify.go:220] Checking for updates...
	I1003 17:38:05.731669    4110 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:38:05.733018    4110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:38:05.735654    4110 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:38:05.738656    4110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:38:05.742124    4110 config.go:182] Loaded profile config "cert-expiration-876000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:38:05.742185    4110 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:38:05.742221    4110 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:38:05.746618    4110 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:38:05.753642    4110 start.go:298] selected driver: qemu2
	I1003 17:38:05.753649    4110 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:38:05.753654    4110 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:38:05.756097    4110 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:38:05.758622    4110 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:38:05.761805    4110 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 17:38:05.761831    4110 cni.go:84] Creating CNI manager for ""
	I1003 17:38:05.761838    4110 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 17:38:05.761842    4110 start_flags.go:321] config:
	{Name:kubernetes-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:38:05.766236    4110 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:38:05.773653    4110 out.go:177] * Starting control plane node kubernetes-upgrade-476000 in cluster kubernetes-upgrade-476000
	I1003 17:38:05.777633    4110 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 17:38:05.777650    4110 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1003 17:38:05.777660    4110 cache.go:57] Caching tarball of preloaded images
	I1003 17:38:05.777723    4110 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:38:05.777729    4110 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1003 17:38:05.777787    4110 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/kubernetes-upgrade-476000/config.json ...
	I1003 17:38:05.777798    4110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/kubernetes-upgrade-476000/config.json: {Name:mk8ee2207d916f7e1f58e55e703a21f2b4db9707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:38:05.778006    4110 start.go:365] acquiring machines lock for kubernetes-upgrade-476000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:38:05.778037    4110 start.go:369] acquired machines lock for "kubernetes-upgrade-476000" in 23.833µs
	I1003 17:38:05.778048    4110 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:38:05.778075    4110 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:38:05.786689    4110 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:38:05.802797    4110 start.go:159] libmachine.API.Create for "kubernetes-upgrade-476000" (driver="qemu2")
	I1003 17:38:05.802827    4110 client.go:168] LocalClient.Create starting
	I1003 17:38:05.802888    4110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:38:05.802914    4110 main.go:141] libmachine: Decoding PEM data...
	I1003 17:38:05.802924    4110 main.go:141] libmachine: Parsing certificate...
	I1003 17:38:05.802958    4110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:38:05.802979    4110 main.go:141] libmachine: Decoding PEM data...
	I1003 17:38:05.802987    4110 main.go:141] libmachine: Parsing certificate...
	I1003 17:38:05.803309    4110 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:38:05.914539    4110 main.go:141] libmachine: Creating SSH key...
	I1003 17:38:05.945870    4110 main.go:141] libmachine: Creating Disk image...
	I1003 17:38:05.945876    4110 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:38:05.946049    4110 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1003 17:38:05.955122    4110 main.go:141] libmachine: STDOUT: 
	I1003 17:38:05.955141    4110 main.go:141] libmachine: STDERR: 
	I1003 17:38:05.955189    4110 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2 +20000M
	I1003 17:38:05.962649    4110 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:38:05.962665    4110 main.go:141] libmachine: STDERR: 
	I1003 17:38:05.962681    4110 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1003 17:38:05.962689    4110 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:38:05.962719    4110 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:60:26:86:a1:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1003 17:38:05.964389    4110 main.go:141] libmachine: STDOUT: 
	I1003 17:38:05.964403    4110 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:38:05.964421    4110 client.go:171] LocalClient.Create took 161.59225ms
	I1003 17:38:07.966503    4110 start.go:128] duration metric: createHost completed in 2.188459208s
	I1003 17:38:07.966524    4110 start.go:83] releasing machines lock for "kubernetes-upgrade-476000", held for 2.188524959s
	W1003 17:38:07.966539    4110 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:38:07.982404    4110 out.go:177] * Deleting "kubernetes-upgrade-476000" in qemu2 ...
	W1003 17:38:07.995690    4110 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:38:07.995698    4110 start.go:703] Will try again in 5 seconds ...
	I1003 17:38:12.997834    4110 start.go:365] acquiring machines lock for kubernetes-upgrade-476000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:38:13.016300    4110 start.go:369] acquired machines lock for "kubernetes-upgrade-476000" in 18.339542ms
	I1003 17:38:13.016388    4110 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:38:13.016600    4110 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:38:13.027388    4110 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:38:13.075747    4110 start.go:159] libmachine.API.Create for "kubernetes-upgrade-476000" (driver="qemu2")
	I1003 17:38:13.075805    4110 client.go:168] LocalClient.Create starting
	I1003 17:38:13.075897    4110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:38:13.075953    4110 main.go:141] libmachine: Decoding PEM data...
	I1003 17:38:13.075973    4110 main.go:141] libmachine: Parsing certificate...
	I1003 17:38:13.076032    4110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:38:13.076078    4110 main.go:141] libmachine: Decoding PEM data...
	I1003 17:38:13.076089    4110 main.go:141] libmachine: Parsing certificate...
	I1003 17:38:13.076595    4110 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:38:13.255128    4110 main.go:141] libmachine: Creating SSH key...
	I1003 17:38:13.288851    4110 main.go:141] libmachine: Creating Disk image...
	I1003 17:38:13.288857    4110 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:38:13.289021    4110 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1003 17:38:13.297882    4110 main.go:141] libmachine: STDOUT: 
	I1003 17:38:13.297900    4110 main.go:141] libmachine: STDERR: 
	I1003 17:38:13.297963    4110 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2 +20000M
	I1003 17:38:13.305360    4110 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:38:13.305378    4110 main.go:141] libmachine: STDERR: 
	I1003 17:38:13.305393    4110 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1003 17:38:13.305398    4110 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:38:13.305442    4110 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:dc:a8:66:fa:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1003 17:38:13.307019    4110 main.go:141] libmachine: STDOUT: 
	I1003 17:38:13.307035    4110 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:38:13.307049    4110 client.go:171] LocalClient.Create took 231.243666ms
	I1003 17:38:15.309223    4110 start.go:128] duration metric: createHost completed in 2.292602334s
	I1003 17:38:15.309302    4110 start.go:83] releasing machines lock for "kubernetes-upgrade-476000", held for 2.293010875s
	W1003 17:38:15.309625    4110 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-476000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-476000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:38:15.319909    4110 out.go:177] 
	W1003 17:38:15.323522    4110 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:38:15.323567    4110 out.go:239] * 
	* 
	W1003 17:38:15.325930    4110 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:38:15.336358    4110 out.go:177] 

** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-476000
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-476000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-476000 status --format={{.Host}}: exit status 7 (36.398875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.181094291s)

-- stdout --
	* [kubernetes-upgrade-476000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-476000 in cluster kubernetes-upgrade-476000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-476000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-476000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:38:15.513370    4145 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:38:15.513520    4145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:38:15.513526    4145 out.go:309] Setting ErrFile to fd 2...
	I1003 17:38:15.513529    4145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:38:15.513672    4145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:38:15.514673    4145 out.go:303] Setting JSON to false
	I1003 17:38:15.530809    4145 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2269,"bootTime":1696377626,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:38:15.530891    4145 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:38:15.535675    4145 out.go:177] * [kubernetes-upgrade-476000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:38:15.538664    4145 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:38:15.542609    4145 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:38:15.538743    4145 notify.go:220] Checking for updates...
	I1003 17:38:15.549575    4145 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:38:15.552637    4145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:38:15.555642    4145 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:38:15.558556    4145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:38:15.561846    4145 config.go:182] Loaded profile config "kubernetes-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1003 17:38:15.562115    4145 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:38:15.566592    4145 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 17:38:15.573624    4145 start.go:298] selected driver: qemu2
	I1003 17:38:15.573637    4145 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:38:15.573697    4145 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:38:15.576045    4145 cni.go:84] Creating CNI manager for ""
	I1003 17:38:15.576061    4145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:38:15.576068    4145 start_flags.go:321] config:
	{Name:kubernetes-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubernetes-upgrade-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:38:15.580319    4145 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:38:15.587590    4145 out.go:177] * Starting control plane node kubernetes-upgrade-476000 in cluster kubernetes-upgrade-476000
	I1003 17:38:15.591664    4145 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:38:15.591679    4145 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:38:15.591697    4145 cache.go:57] Caching tarball of preloaded images
	I1003 17:38:15.591750    4145 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:38:15.591755    4145 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:38:15.591814    4145 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/kubernetes-upgrade-476000/config.json ...
	I1003 17:38:15.592210    4145 start.go:365] acquiring machines lock for kubernetes-upgrade-476000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:38:15.592239    4145 start.go:369] acquired machines lock for "kubernetes-upgrade-476000" in 22.959µs
	I1003 17:38:15.592247    4145 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:38:15.592254    4145 fix.go:54] fixHost starting: 
	I1003 17:38:15.592377    4145 fix.go:102] recreateIfNeeded on kubernetes-upgrade-476000: state=Stopped err=<nil>
	W1003 17:38:15.592385    4145 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:38:15.600613    4145 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-476000" ...
	I1003 17:38:15.603806    4145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:dc:a8:66:fa:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1003 17:38:15.605827    4145 main.go:141] libmachine: STDOUT: 
	I1003 17:38:15.605846    4145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:38:15.605872    4145 fix.go:56] fixHost completed within 13.620709ms
	I1003 17:38:15.605876    4145 start.go:83] releasing machines lock for "kubernetes-upgrade-476000", held for 13.633375ms
	W1003 17:38:15.605882    4145 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:38:15.605918    4145 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:38:15.605923    4145 start.go:703] Will try again in 5 seconds ...
	I1003 17:38:20.608085    4145 start.go:365] acquiring machines lock for kubernetes-upgrade-476000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:38:20.608537    4145 start.go:369] acquired machines lock for "kubernetes-upgrade-476000" in 338.042µs
	I1003 17:38:20.608687    4145 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:38:20.608708    4145 fix.go:54] fixHost starting: 
	I1003 17:38:20.609502    4145 fix.go:102] recreateIfNeeded on kubernetes-upgrade-476000: state=Stopped err=<nil>
	W1003 17:38:20.609531    4145 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:38:20.614682    4145 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-476000" ...
	I1003 17:38:20.623855    4145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:dc:a8:66:fa:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1003 17:38:20.633580    4145 main.go:141] libmachine: STDOUT: 
	I1003 17:38:20.633662    4145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:38:20.633791    4145 fix.go:56] fixHost completed within 25.079ms
	I1003 17:38:20.633816    4145 start.go:83] releasing machines lock for "kubernetes-upgrade-476000", held for 25.253625ms
	W1003 17:38:20.634102    4145 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-476000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-476000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:38:20.641548    4145 out.go:177] 
	W1003 17:38:20.645689    4145 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:38:20.645718    4145 out.go:239] * 
	* 
	W1003 17:38:20.648046    4145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:38:20.656671    4145 out.go:177] 

** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-476000 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-476000 version --output=json: exit status 1 (59.957333ms)

** stderr ** 
	error: context "kubernetes-upgrade-476000" does not exist

** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:523: *** TestKubernetesUpgrade FAILED at 2023-10-03 17:38:20.72921 -0700 PDT m=+2099.956189418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-476000 -n kubernetes-upgrade-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-476000 -n kubernetes-upgrade-476000: exit status 7 (32.879041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-476000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-476000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-476000
--- FAIL: TestKubernetesUpgrade (15.19s)
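
For reference, the upgrade path this test drives, assembled from the Run: lines above, is: start the oldest supported Kubernetes, stop, start again at the newest, then query the cluster. Every later step here failed only because the first start never produced a VM:

	out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.16.0 --driver=qemu2
	out/minikube-darwin-arm64 stop -p kubernetes-upgrade-476000
	out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.28.2 --driver=qemu2
	kubectl --context kubernetes-upgrade-476000 version --output=json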

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.34s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17345
- KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3807902298/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.34s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.58s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17345
- KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3477894060/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.58s)
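
Both HyperkitDriverSkipUpgrade failures are environmental rather than regressions: hyperkit is an Intel-only hypervisor, and minikube refuses it outright on this Apple-silicon agent (DRV_UNSUPPORTED_OS, exit status 56). A guard of roughly this shape in the job script would skip the pair up front; this is a sketch, not how the harness currently gates tests:

	# hyperkit has no darwin/arm64 build; skip its upgrade tests on Apple silicon
	if [ "$(uname -s)/$(uname -m)" = "Darwin/arm64" ]; then
	    echo "skipping hyperkit driver tests on darwin/arm64"
	fi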

TestStoppedBinaryUpgrade/Setup (139.54s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:168: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (139.54s)
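
Same root cause as version_upgrade_test.go:107 in TestRunningBinaryUpgrade above: on this darwin/arm64 agent the v1.6.2 release download returns 404, so the setup phase never completes.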

TestPause/serial/Start (9.84s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-882000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-882000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.771638833s)

-- stdout --
	* [pause-882000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-882000 in cluster pause-882000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-882000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-882000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-882000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-882000 -n pause-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-882000 -n pause-882000: exit status 7 (66.812834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-882000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.84s)

TestNoKubernetes/serial/StartWithK8s (9.74s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-483000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-483000 --driver=qemu2 : exit status 80 (9.670005792s)

-- stdout --
	* [NoKubernetes-483000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-483000 in cluster NoKubernetes-483000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-483000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-483000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-483000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-483000 -n NoKubernetes-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-483000 -n NoKubernetes-483000: exit status 7 (65.135584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.74s)

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-483000 --no-kubernetes --driver=qemu2 
E1003 17:38:43.781925    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-483000 --no-kubernetes --driver=qemu2 : exit status 80 (5.248079583s)

-- stdout --
	* [NoKubernetes-483000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-483000
	* Restarting existing qemu2 VM for "NoKubernetes-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-483000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-483000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-483000 -n NoKubernetes-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-483000 -n NoKubernetes-483000: exit status 7 (67.204625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

TestNoKubernetes/serial/Start (5.33s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-483000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-483000 --no-kubernetes --driver=qemu2 : exit status 80 (5.25566325s)

-- stdout --
	* [NoKubernetes-483000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-483000
	* Restarting existing qemu2 VM for "NoKubernetes-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-483000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-483000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-483000 -n NoKubernetes-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-483000 -n NoKubernetes-483000: exit status 7 (70.866958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.33s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-483000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-483000 --driver=qemu2 : exit status 80 (5.2399015s)

-- stdout --
	* [NoKubernetes-483000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-483000
	* Restarting existing qemu2 VM for "NoKubernetes-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-483000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-483000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-483000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-483000 -n NoKubernetes-483000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-483000 -n NoKubernetes-483000: exit status 7 (67.878542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-483000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

TestNetworkPlugins/group/auto/Start (9.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.78164675s)

-- stdout --
	* [auto-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-991000 in cluster auto-991000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-991000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:38:57.001850    4266 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:38:57.001990    4266 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:38:57.001993    4266 out.go:309] Setting ErrFile to fd 2...
	I1003 17:38:57.001996    4266 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:38:57.002124    4266 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:38:57.003148    4266 out.go:303] Setting JSON to false
	I1003 17:38:57.019066    4266 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2311,"bootTime":1696377626,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:38:57.019163    4266 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:38:57.023428    4266 out.go:177] * [auto-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:38:57.035080    4266 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:38:57.031277    4266 notify.go:220] Checking for updates...
	I1003 17:38:57.041220    4266 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:38:57.049160    4266 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:38:57.053209    4266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:38:57.054624    4266 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:38:57.058224    4266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:38:57.061569    4266 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:38:57.061629    4266 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:38:57.066029    4266 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:38:57.073212    4266 start.go:298] selected driver: qemu2
	I1003 17:38:57.073219    4266 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:38:57.073225    4266 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:38:57.075630    4266 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:38:57.079219    4266 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:38:57.083257    4266 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:38:57.083277    4266 cni.go:84] Creating CNI manager for ""
	I1003 17:38:57.083286    4266 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:38:57.083289    4266 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:38:57.083294    4266 start_flags.go:321] config:
	{Name:auto-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:38:57.087775    4266 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:38:57.092003    4266 out.go:177] * Starting control plane node auto-991000 in cluster auto-991000
	I1003 17:38:57.100210    4266 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:38:57.100232    4266 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:38:57.100242    4266 cache.go:57] Caching tarball of preloaded images
	I1003 17:38:57.100313    4266 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:38:57.100319    4266 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:38:57.100403    4266 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/auto-991000/config.json ...
	I1003 17:38:57.100419    4266 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/auto-991000/config.json: {Name:mkf94c93eda6924bc1142c3d638676b79d828511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:38:57.100626    4266 start.go:365] acquiring machines lock for auto-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:38:57.100657    4266 start.go:369] acquired machines lock for "auto-991000" in 24.792µs
	I1003 17:38:57.100668    4266 start.go:93] Provisioning new machine with config: &{Name:auto-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:38:57.100698    4266 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:38:57.108177    4266 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:38:57.126220    4266 start.go:159] libmachine.API.Create for "auto-991000" (driver="qemu2")
	I1003 17:38:57.126249    4266 client.go:168] LocalClient.Create starting
	I1003 17:38:57.126310    4266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:38:57.126339    4266 main.go:141] libmachine: Decoding PEM data...
	I1003 17:38:57.126351    4266 main.go:141] libmachine: Parsing certificate...
	I1003 17:38:57.126390    4266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:38:57.126411    4266 main.go:141] libmachine: Decoding PEM data...
	I1003 17:38:57.126418    4266 main.go:141] libmachine: Parsing certificate...
	I1003 17:38:57.126768    4266 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:38:57.236503    4266 main.go:141] libmachine: Creating SSH key...
	I1003 17:38:57.338830    4266 main.go:141] libmachine: Creating Disk image...
	I1003 17:38:57.338837    4266 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:38:57.339009    4266 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/disk.qcow2
	I1003 17:38:57.347909    4266 main.go:141] libmachine: STDOUT: 
	I1003 17:38:57.347922    4266 main.go:141] libmachine: STDERR: 
	I1003 17:38:57.347974    4266 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/disk.qcow2 +20000M
	I1003 17:38:57.355363    4266 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:38:57.355389    4266 main.go:141] libmachine: STDERR: 
	I1003 17:38:57.355406    4266 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/disk.qcow2
	I1003 17:38:57.355413    4266 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:38:57.355442    4266 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:1e:91:6e:d1:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/disk.qcow2
	I1003 17:38:57.357103    4266 main.go:141] libmachine: STDOUT: 
	I1003 17:38:57.357115    4266 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:38:57.357137    4266 client.go:171] LocalClient.Create took 230.888083ms
	I1003 17:38:59.359349    4266 start.go:128] duration metric: createHost completed in 2.258665834s
	I1003 17:38:59.359408    4266 start.go:83] releasing machines lock for "auto-991000", held for 2.258782625s
	W1003 17:38:59.359461    4266 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:38:59.368819    4266 out.go:177] * Deleting "auto-991000" in qemu2 ...
	W1003 17:38:59.394371    4266 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:38:59.394405    4266 start.go:703] Will try again in 5 seconds ...
	I1003 17:39:04.396589    4266 start.go:365] acquiring machines lock for auto-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:39:04.396994    4266 start.go:369] acquired machines lock for "auto-991000" in 295.583µs
	I1003 17:39:04.397117    4266 start.go:93] Provisioning new machine with config: &{Name:auto-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:39:04.397373    4266 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:39:04.407042    4266 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:39:04.454180    4266 start.go:159] libmachine.API.Create for "auto-991000" (driver="qemu2")
	I1003 17:39:04.454262    4266 client.go:168] LocalClient.Create starting
	I1003 17:39:04.454376    4266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:39:04.454436    4266 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:04.454453    4266 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:04.454514    4266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:39:04.454549    4266 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:04.454567    4266 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:04.455068    4266 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:39:04.577109    4266 main.go:141] libmachine: Creating SSH key...
	I1003 17:39:04.696292    4266 main.go:141] libmachine: Creating Disk image...
	I1003 17:39:04.696297    4266 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:39:04.696456    4266 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/disk.qcow2
	I1003 17:39:04.705312    4266 main.go:141] libmachine: STDOUT: 
	I1003 17:39:04.705330    4266 main.go:141] libmachine: STDERR: 
	I1003 17:39:04.705381    4266 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/disk.qcow2 +20000M
	I1003 17:39:04.713015    4266 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:39:04.713027    4266 main.go:141] libmachine: STDERR: 
	I1003 17:39:04.713045    4266 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/disk.qcow2
	I1003 17:39:04.713049    4266 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:39:04.713086    4266 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:3c:45:02:e2:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/auto-991000/disk.qcow2
	I1003 17:39:04.714743    4266 main.go:141] libmachine: STDOUT: 
	I1003 17:39:04.714757    4266 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:39:04.714769    4266 client.go:171] LocalClient.Create took 260.504833ms
	I1003 17:39:06.716953    4266 start.go:128] duration metric: createHost completed in 2.319583083s
	I1003 17:39:06.717047    4266 start.go:83] releasing machines lock for "auto-991000", held for 2.320075042s
	W1003 17:39:06.717514    4266 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:06.725240    4266 out.go:177] 
	W1003 17:39:06.730277    4266 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:39:06.730315    4266 out.go:239] * 
	* 
	W1003 17:39:06.733107    4266 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:39:06.742005    4266 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.78s)

TestNetworkPlugins/group/kindnet/Start (9.82s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E1003 17:39:11.491546    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/ingress-addon-legacy-830000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.821587958s)

-- stdout --
	* [kindnet-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-991000 in cluster kindnet-991000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-991000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:39:08.867779    4385 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:39:08.867918    4385 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:39:08.867921    4385 out.go:309] Setting ErrFile to fd 2...
	I1003 17:39:08.867924    4385 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:39:08.868048    4385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:39:08.869038    4385 out.go:303] Setting JSON to false
	I1003 17:39:08.885195    4385 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2322,"bootTime":1696377626,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:39:08.885282    4385 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:39:08.890778    4385 out.go:177] * [kindnet-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:39:08.898608    4385 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:39:08.902451    4385 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:39:08.898679    4385 notify.go:220] Checking for updates...
	I1003 17:39:08.908573    4385 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:39:08.911652    4385 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:39:08.914681    4385 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:39:08.917626    4385 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:39:08.921006    4385 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:39:08.921049    4385 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:39:08.925508    4385 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:39:08.932595    4385 start.go:298] selected driver: qemu2
	I1003 17:39:08.932603    4385 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:39:08.932610    4385 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:39:08.935011    4385 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:39:08.938557    4385 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:39:08.941686    4385 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:39:08.941708    4385 cni.go:84] Creating CNI manager for "kindnet"
	I1003 17:39:08.941713    4385 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 17:39:08.941719    4385 start_flags.go:321] config:
	{Name:kindnet-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:39:08.946259    4385 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:39:08.951582    4385 out.go:177] * Starting control plane node kindnet-991000 in cluster kindnet-991000
	I1003 17:39:08.955639    4385 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:39:08.955656    4385 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:39:08.955666    4385 cache.go:57] Caching tarball of preloaded images
	I1003 17:39:08.955730    4385 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:39:08.955735    4385 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:39:08.955801    4385 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/kindnet-991000/config.json ...
	I1003 17:39:08.955813    4385 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/kindnet-991000/config.json: {Name:mk3a97089339c61a68697b2cf868572cac9934f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:39:08.956021    4385 start.go:365] acquiring machines lock for kindnet-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:39:08.956050    4385 start.go:369] acquired machines lock for "kindnet-991000" in 23.708µs
	I1003 17:39:08.956061    4385 start.go:93] Provisioning new machine with config: &{Name:kindnet-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:39:08.956088    4385 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:39:08.964656    4385 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:39:08.981445    4385 start.go:159] libmachine.API.Create for "kindnet-991000" (driver="qemu2")
	I1003 17:39:08.981472    4385 client.go:168] LocalClient.Create starting
	I1003 17:39:08.981527    4385 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:39:08.981568    4385 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:08.981578    4385 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:08.981618    4385 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:39:08.981636    4385 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:08.981643    4385 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:08.982016    4385 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:39:09.092467    4385 main.go:141] libmachine: Creating SSH key...
	I1003 17:39:09.197829    4385 main.go:141] libmachine: Creating Disk image...
	I1003 17:39:09.197837    4385 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:39:09.198022    4385 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/disk.qcow2
	I1003 17:39:09.206719    4385 main.go:141] libmachine: STDOUT: 
	I1003 17:39:09.206735    4385 main.go:141] libmachine: STDERR: 
	I1003 17:39:09.206784    4385 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/disk.qcow2 +20000M
	I1003 17:39:09.214296    4385 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:39:09.214316    4385 main.go:141] libmachine: STDERR: 
	I1003 17:39:09.214337    4385 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/disk.qcow2
	I1003 17:39:09.214342    4385 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:39:09.214388    4385 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:08:dc:f1:39:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/disk.qcow2
	I1003 17:39:09.216021    4385 main.go:141] libmachine: STDOUT: 
	I1003 17:39:09.216034    4385 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:39:09.216055    4385 client.go:171] LocalClient.Create took 234.582375ms
	I1003 17:39:11.218223    4385 start.go:128] duration metric: createHost completed in 2.262146791s
	I1003 17:39:11.218318    4385 start.go:83] releasing machines lock for "kindnet-991000", held for 2.262302292s
	W1003 17:39:11.218365    4385 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:11.230007    4385 out.go:177] * Deleting "kindnet-991000" in qemu2 ...
	W1003 17:39:11.252309    4385 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:11.252342    4385 start.go:703] Will try again in 5 seconds ...
	I1003 17:39:16.254544    4385 start.go:365] acquiring machines lock for kindnet-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:39:16.254973    4385 start.go:369] acquired machines lock for "kindnet-991000" in 322.792µs
	I1003 17:39:16.255118    4385 start.go:93] Provisioning new machine with config: &{Name:kindnet-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:39:16.255423    4385 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:39:16.267263    4385 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:39:16.315518    4385 start.go:159] libmachine.API.Create for "kindnet-991000" (driver="qemu2")
	I1003 17:39:16.315555    4385 client.go:168] LocalClient.Create starting
	I1003 17:39:16.315656    4385 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:39:16.315703    4385 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:16.315719    4385 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:16.315781    4385 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:39:16.315815    4385 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:16.315830    4385 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:16.316533    4385 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:39:16.445182    4385 main.go:141] libmachine: Creating SSH key...
	I1003 17:39:16.600475    4385 main.go:141] libmachine: Creating Disk image...
	I1003 17:39:16.600481    4385 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:39:16.600656    4385 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/disk.qcow2
	I1003 17:39:16.609882    4385 main.go:141] libmachine: STDOUT: 
	I1003 17:39:16.609898    4385 main.go:141] libmachine: STDERR: 
	I1003 17:39:16.609952    4385 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/disk.qcow2 +20000M
	I1003 17:39:16.617429    4385 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:39:16.617441    4385 main.go:141] libmachine: STDERR: 
	I1003 17:39:16.617455    4385 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/disk.qcow2
	I1003 17:39:16.617465    4385 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:39:16.617529    4385 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:68:ab:16:7e:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kindnet-991000/disk.qcow2
	I1003 17:39:16.619204    4385 main.go:141] libmachine: STDOUT: 
	I1003 17:39:16.619218    4385 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:39:16.619230    4385 client.go:171] LocalClient.Create took 303.674084ms
	I1003 17:39:18.621379    4385 start.go:128] duration metric: createHost completed in 2.365974625s
	I1003 17:39:18.621436    4385 start.go:83] releasing machines lock for "kindnet-991000", held for 2.366483208s
	W1003 17:39:18.621862    4385 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:18.633464    4385 out.go:177] 
	W1003 17:39:18.637631    4385 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:39:18.637664    4385 out.go:239] * 
	* 
	W1003 17:39:18.640258    4385 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:39:18.649552    4385 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.82s)
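
Note: every start in this group fails the same way before the VM ever boots: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and minikube exits with status 80 (GUEST_PROVISION) after one delete-and-retry cycle. A minimal daemon check on the CI host, assuming socket_vmnet was installed as a Homebrew service as in the usual qemu2-driver setup (the paths are the ones in the log above; this sketch is not part of the test run):

	ls -l /var/run/socket_vmnet              # the listening socket should exist
	sudo brew services info socket_vmnet     # is the launchd job loaded and running?
	sudo brew services restart socket_vmnet  # if stopped, this should clear "Connection refused"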

TestNetworkPlugins/group/calico/Start (9.79s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.783168375s)

-- stdout --
	* [calico-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-991000 in cluster calico-991000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-991000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:39:20.877477    4504 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:39:20.877618    4504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:39:20.877621    4504 out.go:309] Setting ErrFile to fd 2...
	I1003 17:39:20.877624    4504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:39:20.877764    4504 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:39:20.878789    4504 out.go:303] Setting JSON to false
	I1003 17:39:20.894748    4504 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2334,"bootTime":1696377626,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:39:20.894845    4504 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:39:20.899266    4504 out.go:177] * [calico-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:39:20.907124    4504 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:39:20.907170    4504 notify.go:220] Checking for updates...
	I1003 17:39:20.911138    4504 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:39:20.914114    4504 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:39:20.917097    4504 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:39:20.920129    4504 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:39:20.923020    4504 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:39:20.926431    4504 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:39:20.926477    4504 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:39:20.931035    4504 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:39:20.938052    4504 start.go:298] selected driver: qemu2
	I1003 17:39:20.938059    4504 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:39:20.938064    4504 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:39:20.940181    4504 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:39:20.943123    4504 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:39:20.946171    4504 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:39:20.946201    4504 cni.go:84] Creating CNI manager for "calico"
	I1003 17:39:20.946205    4504 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I1003 17:39:20.946213    4504 start_flags.go:321] config:
	{Name:calico-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:calico-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:39:20.950597    4504 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:39:20.958111    4504 out.go:177] * Starting control plane node calico-991000 in cluster calico-991000
	I1003 17:39:20.962072    4504 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:39:20.962085    4504 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:39:20.962095    4504 cache.go:57] Caching tarball of preloaded images
	I1003 17:39:20.962151    4504 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:39:20.962156    4504 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:39:20.962213    4504 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/calico-991000/config.json ...
	I1003 17:39:20.962224    4504 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/calico-991000/config.json: {Name:mk928e7f4a031848fab89475538e80bb1d236121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:39:20.962425    4504 start.go:365] acquiring machines lock for calico-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:39:20.962451    4504 start.go:369] acquired machines lock for "calico-991000" in 21.75µs
	I1003 17:39:20.962462    4504 start.go:93] Provisioning new machine with config: &{Name:calico-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:calico-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:39:20.962491    4504 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:39:20.970002    4504 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:39:20.985457    4504 start.go:159] libmachine.API.Create for "calico-991000" (driver="qemu2")
	I1003 17:39:20.985488    4504 client.go:168] LocalClient.Create starting
	I1003 17:39:20.985534    4504 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:39:20.985561    4504 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:20.985578    4504 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:20.985611    4504 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:39:20.985628    4504 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:20.985637    4504 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:20.985948    4504 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:39:21.095276    4504 main.go:141] libmachine: Creating SSH key...
	I1003 17:39:21.262792    4504 main.go:141] libmachine: Creating Disk image...
	I1003 17:39:21.262800    4504 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:39:21.263001    4504 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/disk.qcow2
	I1003 17:39:21.272037    4504 main.go:141] libmachine: STDOUT: 
	I1003 17:39:21.272052    4504 main.go:141] libmachine: STDERR: 
	I1003 17:39:21.272101    4504 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/disk.qcow2 +20000M
	I1003 17:39:21.279480    4504 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:39:21.279503    4504 main.go:141] libmachine: STDERR: 
	I1003 17:39:21.279529    4504 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/disk.qcow2
	I1003 17:39:21.279534    4504 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:39:21.279575    4504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:91:d4:34:56:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/disk.qcow2
	I1003 17:39:21.281277    4504 main.go:141] libmachine: STDOUT: 
	I1003 17:39:21.281294    4504 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:39:21.281323    4504 client.go:171] LocalClient.Create took 295.834917ms
	I1003 17:39:23.283457    4504 start.go:128] duration metric: createHost completed in 2.320991875s
	I1003 17:39:23.283525    4504 start.go:83] releasing machines lock for "calico-991000", held for 2.321111833s
	W1003 17:39:23.283600    4504 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:23.290836    4504 out.go:177] * Deleting "calico-991000" in qemu2 ...
	W1003 17:39:23.311167    4504 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:23.311192    4504 start.go:703] Will try again in 5 seconds ...
	I1003 17:39:28.313271    4504 start.go:365] acquiring machines lock for calico-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:39:28.313813    4504 start.go:369] acquired machines lock for "calico-991000" in 435.542µs
	I1003 17:39:28.313934    4504 start.go:93] Provisioning new machine with config: &{Name:calico-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:calico-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:39:28.314365    4504 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:39:28.324917    4504 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:39:28.375674    4504 start.go:159] libmachine.API.Create for "calico-991000" (driver="qemu2")
	I1003 17:39:28.375712    4504 client.go:168] LocalClient.Create starting
	I1003 17:39:28.375822    4504 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:39:28.375868    4504 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:28.375883    4504 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:28.375945    4504 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:39:28.375979    4504 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:28.375991    4504 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:28.376485    4504 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:39:28.501828    4504 main.go:141] libmachine: Creating SSH key...
	I1003 17:39:28.571064    4504 main.go:141] libmachine: Creating Disk image...
	I1003 17:39:28.571070    4504 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:39:28.571241    4504 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/disk.qcow2
	I1003 17:39:28.580203    4504 main.go:141] libmachine: STDOUT: 
	I1003 17:39:28.580231    4504 main.go:141] libmachine: STDERR: 
	I1003 17:39:28.580296    4504 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/disk.qcow2 +20000M
	I1003 17:39:28.587748    4504 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:39:28.587766    4504 main.go:141] libmachine: STDERR: 
	I1003 17:39:28.587777    4504 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/disk.qcow2
	I1003 17:39:28.587785    4504 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:39:28.587831    4504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d7:6a:a7:5a:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/calico-991000/disk.qcow2
	I1003 17:39:28.589482    4504 main.go:141] libmachine: STDOUT: 
	I1003 17:39:28.589501    4504 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:39:28.589513    4504 client.go:171] LocalClient.Create took 213.8005ms
	I1003 17:39:30.591676    4504 start.go:128] duration metric: createHost completed in 2.277321417s
	I1003 17:39:30.591767    4504 start.go:83] releasing machines lock for "calico-991000", held for 2.277976083s
	W1003 17:39:30.592177    4504 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:30.601804    4504 out.go:177] 
	W1003 17:39:30.606933    4504 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:39:30.606961    4504 out.go:239] * 
	* 
	W1003 17:39:30.609552    4504 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:39:30.619793    4504 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.79s)
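
Note: the socket can be probed with the same client binary the driver invokes, without involving QEMU at all. As the socket_vmnet docs describe it, socket_vmnet_client connects to the socket and then runs the given command with the vmnet file descriptor attached, so a trivial command is enough to reproduce the failure (a sketch; `echo connected` is an arbitrary stand-in, not anything the tests run):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo connected
	# with a healthy daemon this prints "connected"; with the daemon down it fails
	# with the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused'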

TestNetworkPlugins/group/custom-flannel/Start (9.91s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.900532666s)

-- stdout --
	* [custom-flannel-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-991000 in cluster custom-flannel-991000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-991000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:39:32.984168    4626 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:39:32.984308    4626 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:39:32.984316    4626 out.go:309] Setting ErrFile to fd 2...
	I1003 17:39:32.984318    4626 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:39:32.984447    4626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:39:32.985540    4626 out.go:303] Setting JSON to false
	I1003 17:39:33.001495    4626 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2346,"bootTime":1696377626,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:39:33.001581    4626 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:39:33.007027    4626 out.go:177] * [custom-flannel-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:39:33.013985    4626 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:39:33.017992    4626 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:39:33.014062    4626 notify.go:220] Checking for updates...
	I1003 17:39:33.023940    4626 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:39:33.026969    4626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:39:33.029921    4626 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:39:33.032958    4626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:39:33.036362    4626 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:39:33.036402    4626 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:39:33.040895    4626 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:39:33.047941    4626 start.go:298] selected driver: qemu2
	I1003 17:39:33.047947    4626 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:39:33.047952    4626 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:39:33.050458    4626 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:39:33.052897    4626 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:39:33.056001    4626 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:39:33.056019    4626 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1003 17:39:33.056026    4626 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1003 17:39:33.056031    4626 start_flags.go:321] config:
	{Name:custom-flannel-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:39:33.060489    4626 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:39:33.067926    4626 out.go:177] * Starting control plane node custom-flannel-991000 in cluster custom-flannel-991000
	I1003 17:39:33.071891    4626 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:39:33.071905    4626 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:39:33.071915    4626 cache.go:57] Caching tarball of preloaded images
	I1003 17:39:33.071969    4626 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:39:33.071975    4626 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:39:33.072032    4626 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/custom-flannel-991000/config.json ...
	I1003 17:39:33.072044    4626 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/custom-flannel-991000/config.json: {Name:mka6276525e2dd8ce7f36d1b23f0b1e189a5360f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:39:33.072243    4626 start.go:365] acquiring machines lock for custom-flannel-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:39:33.072276    4626 start.go:369] acquired machines lock for "custom-flannel-991000" in 25.459µs
	I1003 17:39:33.072289    4626 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:39:33.072322    4626 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:39:33.079928    4626 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:39:33.096612    4626 start.go:159] libmachine.API.Create for "custom-flannel-991000" (driver="qemu2")
	I1003 17:39:33.096642    4626 client.go:168] LocalClient.Create starting
	I1003 17:39:33.096693    4626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:39:33.096723    4626 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:33.096732    4626 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:33.096768    4626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:39:33.096786    4626 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:33.096792    4626 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:33.097115    4626 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:39:33.207166    4626 main.go:141] libmachine: Creating SSH key...
	I1003 17:39:33.402998    4626 main.go:141] libmachine: Creating Disk image...
	I1003 17:39:33.403009    4626 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:39:33.403193    4626 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/disk.qcow2
	I1003 17:39:33.412474    4626 main.go:141] libmachine: STDOUT: 
	I1003 17:39:33.412491    4626 main.go:141] libmachine: STDERR: 
	I1003 17:39:33.412563    4626 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/disk.qcow2 +20000M
	I1003 17:39:33.420033    4626 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:39:33.420044    4626 main.go:141] libmachine: STDERR: 
	I1003 17:39:33.420063    4626 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/disk.qcow2
	I1003 17:39:33.420069    4626 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:39:33.420103    4626 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:a4:6c:56:0a:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/disk.qcow2
	I1003 17:39:33.421751    4626 main.go:141] libmachine: STDOUT: 
	I1003 17:39:33.421769    4626 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:39:33.421790    4626 client.go:171] LocalClient.Create took 325.147834ms
	I1003 17:39:35.423921    4626 start.go:128] duration metric: createHost completed in 2.351625875s
	I1003 17:39:35.423970    4626 start.go:83] releasing machines lock for "custom-flannel-991000", held for 2.351733s
	W1003 17:39:35.423995    4626 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:35.430452    4626 out.go:177] * Deleting "custom-flannel-991000" in qemu2 ...
	W1003 17:39:35.451733    4626 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:35.451770    4626 start.go:703] Will try again in 5 seconds ...
	I1003 17:39:40.453918    4626 start.go:365] acquiring machines lock for custom-flannel-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:39:40.454448    4626 start.go:369] acquired machines lock for "custom-flannel-991000" in 429.166µs
	I1003 17:39:40.454615    4626 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:39:40.454969    4626 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:39:40.464595    4626 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:39:40.513226    4626 start.go:159] libmachine.API.Create for "custom-flannel-991000" (driver="qemu2")
	I1003 17:39:40.513271    4626 client.go:168] LocalClient.Create starting
	I1003 17:39:40.513390    4626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:39:40.513446    4626 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:40.513469    4626 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:40.513530    4626 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:39:40.513564    4626 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:40.513579    4626 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:40.514077    4626 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:39:40.634137    4626 main.go:141] libmachine: Creating SSH key...
	I1003 17:39:40.796198    4626 main.go:141] libmachine: Creating Disk image...
	I1003 17:39:40.796207    4626 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:39:40.796375    4626 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/disk.qcow2
	I1003 17:39:40.805260    4626 main.go:141] libmachine: STDOUT: 
	I1003 17:39:40.805274    4626 main.go:141] libmachine: STDERR: 
	I1003 17:39:40.805322    4626 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/disk.qcow2 +20000M
	I1003 17:39:40.812657    4626 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:39:40.812670    4626 main.go:141] libmachine: STDERR: 
	I1003 17:39:40.812681    4626 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/disk.qcow2
	I1003 17:39:40.812687    4626 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:39:40.812734    4626 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:85:58:49:9c:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/custom-flannel-991000/disk.qcow2
	I1003 17:39:40.814331    4626 main.go:141] libmachine: STDOUT: 
	I1003 17:39:40.814344    4626 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:39:40.814358    4626 client.go:171] LocalClient.Create took 301.088542ms
	I1003 17:39:42.816545    4626 start.go:128] duration metric: createHost completed in 2.361581708s
	I1003 17:39:42.816646    4626 start.go:83] releasing machines lock for "custom-flannel-991000", held for 2.3622045s
	W1003 17:39:42.817032    4626 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:42.826851    4626 out.go:177] 
	W1003 17:39:42.831856    4626 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:39:42.831929    4626 out.go:239] * 
	* 
	W1003 17:39:42.834798    4626 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:39:42.844879    4626 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.91s)
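Note: every start failure in this group reduces to the same root cause visible in the stderr above: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so the qemu-system-aarch64 process is never launched. The disk-image steps (qemu-img convert, qemu-img resize) all succeed; only the network attach fails. A minimal standalone sketch that reproduces the condition by dialing the socket directly (illustrative code, not part of minikube; the socket path is copied from the failing command line):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path copied from the failing socket_vmnet_client invocation above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, time.Second)
		if err != nil {
			// With no daemon listening, this reports the same
			// "connect: connection refused" the driver logs.
			fmt.Fprintln(os.Stderr, "dial failed:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On a healthy host the dial succeeds; "connection refused" here means no daemon is listening on the socket, which matches every subsequent Start failure in this report.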

TestNetworkPlugins/group/false/Start (9.72s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.7149755s)

-- stdout --
	* [false-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-991000 in cluster false-991000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-991000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:39:45.220696    4757 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:39:45.220842    4757 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:39:45.220845    4757 out.go:309] Setting ErrFile to fd 2...
	I1003 17:39:45.220848    4757 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:39:45.220978    4757 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:39:45.222022    4757 out.go:303] Setting JSON to false
	I1003 17:39:45.238251    4757 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2359,"bootTime":1696377626,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:39:45.238344    4757 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:39:45.243134    4757 out.go:177] * [false-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:39:45.250109    4757 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:39:45.250180    4757 notify.go:220] Checking for updates...
	I1003 17:39:45.258152    4757 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:39:45.261997    4757 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:39:45.265064    4757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:39:45.268083    4757 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:39:45.270997    4757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:39:45.274427    4757 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:39:45.274478    4757 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:39:45.279040    4757 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:39:45.286086    4757 start.go:298] selected driver: qemu2
	I1003 17:39:45.286094    4757 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:39:45.286102    4757 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:39:45.288494    4757 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:39:45.292011    4757 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:39:45.295175    4757 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:39:45.295197    4757 cni.go:84] Creating CNI manager for "false"
	I1003 17:39:45.295200    4757 start_flags.go:321] config:
	{Name:false-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:false-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:39:45.299881    4757 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:39:45.307111    4757 out.go:177] * Starting control plane node false-991000 in cluster false-991000
	I1003 17:39:45.311002    4757 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:39:45.311026    4757 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:39:45.311045    4757 cache.go:57] Caching tarball of preloaded images
	I1003 17:39:45.311118    4757 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:39:45.311124    4757 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:39:45.311212    4757 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/false-991000/config.json ...
	I1003 17:39:45.311224    4757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/false-991000/config.json: {Name:mk52e6eb5fdc86ad97404d4d5551415f81382632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:39:45.311446    4757 start.go:365] acquiring machines lock for false-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:39:45.311477    4757 start.go:369] acquired machines lock for "false-991000" in 25.041µs
	I1003 17:39:45.311489    4757 start.go:93] Provisioning new machine with config: &{Name:false-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:false-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:39:45.311524    4757 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:39:45.319059    4757 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:39:45.335922    4757 start.go:159] libmachine.API.Create for "false-991000" (driver="qemu2")
	I1003 17:39:45.335946    4757 client.go:168] LocalClient.Create starting
	I1003 17:39:45.335995    4757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:39:45.336023    4757 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:45.336038    4757 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:45.336073    4757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:39:45.336092    4757 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:45.336099    4757 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:45.336434    4757 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:39:45.447299    4757 main.go:141] libmachine: Creating SSH key...
	I1003 17:39:45.506838    4757 main.go:141] libmachine: Creating Disk image...
	I1003 17:39:45.506847    4757 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:39:45.506986    4757 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/disk.qcow2
	I1003 17:39:45.515585    4757 main.go:141] libmachine: STDOUT: 
	I1003 17:39:45.515598    4757 main.go:141] libmachine: STDERR: 
	I1003 17:39:45.515654    4757 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/disk.qcow2 +20000M
	I1003 17:39:45.523435    4757 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:39:45.523451    4757 main.go:141] libmachine: STDERR: 
	I1003 17:39:45.523469    4757 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/disk.qcow2
	I1003 17:39:45.523477    4757 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:39:45.523508    4757 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:8f:01:3f:70:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/disk.qcow2
	I1003 17:39:45.525210    4757 main.go:141] libmachine: STDOUT: 
	I1003 17:39:45.525223    4757 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:39:45.525243    4757 client.go:171] LocalClient.Create took 189.295791ms
	I1003 17:39:47.527429    4757 start.go:128] duration metric: createHost completed in 2.21590525s
	I1003 17:39:47.527522    4757 start.go:83] releasing machines lock for "false-991000", held for 2.21607825s
	W1003 17:39:47.527566    4757 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:47.536961    4757 out.go:177] * Deleting "false-991000" in qemu2 ...
	W1003 17:39:47.558009    4757 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:47.558048    4757 start.go:703] Will try again in 5 seconds ...
	I1003 17:39:52.560209    4757 start.go:365] acquiring machines lock for false-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:39:52.560665    4757 start.go:369] acquired machines lock for "false-991000" in 349.583µs
	I1003 17:39:52.560809    4757 start.go:93] Provisioning new machine with config: &{Name:false-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:false-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:39:52.561115    4757 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:39:52.570905    4757 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:39:52.617824    4757 start.go:159] libmachine.API.Create for "false-991000" (driver="qemu2")
	I1003 17:39:52.617868    4757 client.go:168] LocalClient.Create starting
	I1003 17:39:52.617973    4757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:39:52.618025    4757 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:52.618044    4757 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:52.618107    4757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:39:52.618140    4757 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:52.618155    4757 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:52.618767    4757 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:39:52.741205    4757 main.go:141] libmachine: Creating SSH key...
	I1003 17:39:52.846141    4757 main.go:141] libmachine: Creating Disk image...
	I1003 17:39:52.846146    4757 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:39:52.846297    4757 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/disk.qcow2
	I1003 17:39:52.855119    4757 main.go:141] libmachine: STDOUT: 
	I1003 17:39:52.855133    4757 main.go:141] libmachine: STDERR: 
	I1003 17:39:52.855180    4757 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/disk.qcow2 +20000M
	I1003 17:39:52.862650    4757 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:39:52.862662    4757 main.go:141] libmachine: STDERR: 
	I1003 17:39:52.862675    4757 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/disk.qcow2
	I1003 17:39:52.862680    4757 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:39:52.862716    4757 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:ac:88:8f:57:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/false-991000/disk.qcow2
	I1003 17:39:52.864411    4757 main.go:141] libmachine: STDOUT: 
	I1003 17:39:52.864424    4757 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:39:52.864436    4757 client.go:171] LocalClient.Create took 246.567708ms
	I1003 17:39:54.866577    4757 start.go:128] duration metric: createHost completed in 2.305468375s
	I1003 17:39:54.866635    4757 start.go:83] releasing machines lock for "false-991000", held for 2.305990583s
	W1003 17:39:54.866994    4757 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:54.877750    4757 out.go:177] 
	W1003 17:39:54.881652    4757 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:39:54.881680    4757 out.go:239] * 
	* 
	W1003 17:39:54.884443    4757 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:39:54.894622    4757 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.72s)
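The stderr above also shows the driver's recovery path: the first StartHost attempt fails, the half-created "false-991000" profile is deleted, one retry follows after five seconds, and the second failure exits with status 80 (GUEST_PROVISION), which net_test.go then reports as a failed start. A sketch of that observed flow, written from the log rather than from minikube's actual source (names and structure are illustrative):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost stands in for host creation; here it always fails the way
	// the log does while socket_vmnet is unreachable.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := startHost()
		if err == nil {
			return
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // the exit status the test harness records
		}
	}

Because the daemon never comes back within those five seconds, both attempts fail and every test in this group spends roughly ten seconds before exiting.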

TestNetworkPlugins/group/enable-default-cni/Start (9.91s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.907846583s)

-- stdout --
	* [enable-default-cni-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-991000 in cluster enable-default-cni-991000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-991000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:39:57.085395    4874 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:39:57.085560    4874 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:39:57.085563    4874 out.go:309] Setting ErrFile to fd 2...
	I1003 17:39:57.085566    4874 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:39:57.085695    4874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:39:57.086691    4874 out.go:303] Setting JSON to false
	I1003 17:39:57.102766    4874 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2371,"bootTime":1696377626,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:39:57.102836    4874 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:39:57.108090    4874 out.go:177] * [enable-default-cni-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:39:57.115109    4874 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:39:57.119038    4874 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:39:57.115197    4874 notify.go:220] Checking for updates...
	I1003 17:39:57.124925    4874 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:39:57.128021    4874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:39:57.131070    4874 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:39:57.132519    4874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:39:57.136409    4874 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:39:57.136454    4874 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:39:57.141031    4874 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:39:57.146023    4874 start.go:298] selected driver: qemu2
	I1003 17:39:57.146031    4874 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:39:57.146043    4874 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:39:57.148319    4874 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:39:57.151037    4874 out.go:177] * Automatically selected the socket_vmnet network
	E1003 17:39:57.154137    4874 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1003 17:39:57.154152    4874 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:39:57.154180    4874 cni.go:84] Creating CNI manager for "bridge"
	I1003 17:39:57.154183    4874 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:39:57.154188    4874 start_flags.go:321] config:
	{Name:enable-default-cni-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:39:57.158533    4874 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:39:57.166078    4874 out.go:177] * Starting control plane node enable-default-cni-991000 in cluster enable-default-cni-991000
	I1003 17:39:57.169940    4874 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:39:57.169959    4874 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:39:57.169970    4874 cache.go:57] Caching tarball of preloaded images
	I1003 17:39:57.170030    4874 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:39:57.170036    4874 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:39:57.170092    4874 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/enable-default-cni-991000/config.json ...
	I1003 17:39:57.170102    4874 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/enable-default-cni-991000/config.json: {Name:mk413f68d7c50a5ab184814c9e3dfd79e410926a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:39:57.170299    4874 start.go:365] acquiring machines lock for enable-default-cni-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:39:57.170332    4874 start.go:369] acquired machines lock for "enable-default-cni-991000" in 26.25µs
	I1003 17:39:57.170359    4874 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:39:57.170385    4874 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:39:57.179036    4874 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:39:57.194909    4874 start.go:159] libmachine.API.Create for "enable-default-cni-991000" (driver="qemu2")
	I1003 17:39:57.194932    4874 client.go:168] LocalClient.Create starting
	I1003 17:39:57.194987    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:39:57.195013    4874 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:57.195024    4874 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:57.195057    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:39:57.195074    4874 main.go:141] libmachine: Decoding PEM data...
	I1003 17:39:57.195081    4874 main.go:141] libmachine: Parsing certificate...
	I1003 17:39:57.195411    4874 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:39:57.304640    4874 main.go:141] libmachine: Creating SSH key...
	I1003 17:39:57.572548    4874 main.go:141] libmachine: Creating Disk image...
	I1003 17:39:57.572559    4874 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:39:57.572783    4874 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/disk.qcow2
	I1003 17:39:57.582429    4874 main.go:141] libmachine: STDOUT: 
	I1003 17:39:57.582449    4874 main.go:141] libmachine: STDERR: 
	I1003 17:39:57.582511    4874 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/disk.qcow2 +20000M
	I1003 17:39:57.590086    4874 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:39:57.590099    4874 main.go:141] libmachine: STDERR: 
	I1003 17:39:57.590120    4874 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/disk.qcow2
	I1003 17:39:57.590127    4874 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:39:57.590165    4874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:8e:30:5f:c3:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/disk.qcow2
	I1003 17:39:57.591734    4874 main.go:141] libmachine: STDOUT: 
	I1003 17:39:57.591747    4874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:39:57.591768    4874 client.go:171] LocalClient.Create took 396.837625ms
	I1003 17:39:59.593992    4874 start.go:128] duration metric: createHost completed in 2.423617958s
	I1003 17:39:59.594078    4874 start.go:83] releasing machines lock for "enable-default-cni-991000", held for 2.423780958s
	W1003 17:39:59.594126    4874 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:59.605587    4874 out.go:177] * Deleting "enable-default-cni-991000" in qemu2 ...
	W1003 17:39:59.625499    4874 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:39:59.625525    4874 start.go:703] Will try again in 5 seconds ...
	I1003 17:40:04.627709    4874 start.go:365] acquiring machines lock for enable-default-cni-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:04.628186    4874 start.go:369] acquired machines lock for "enable-default-cni-991000" in 392.208µs
	I1003 17:40:04.628334    4874 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:40:04.628648    4874 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:40:04.637218    4874 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:40:04.686948    4874 start.go:159] libmachine.API.Create for "enable-default-cni-991000" (driver="qemu2")
	I1003 17:40:04.686992    4874 client.go:168] LocalClient.Create starting
	I1003 17:40:04.687113    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:40:04.687179    4874 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:04.687201    4874 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:04.687264    4874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:40:04.687310    4874 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:04.687328    4874 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:04.687882    4874 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:40:04.814116    4874 main.go:141] libmachine: Creating SSH key...
	I1003 17:40:04.906092    4874 main.go:141] libmachine: Creating Disk image...
	I1003 17:40:04.906104    4874 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:40:04.906263    4874 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/disk.qcow2
	I1003 17:40:04.915396    4874 main.go:141] libmachine: STDOUT: 
	I1003 17:40:04.915410    4874 main.go:141] libmachine: STDERR: 
	I1003 17:40:04.915464    4874 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/disk.qcow2 +20000M
	I1003 17:40:04.922899    4874 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:40:04.922921    4874 main.go:141] libmachine: STDERR: 
	I1003 17:40:04.922937    4874 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/disk.qcow2
	I1003 17:40:04.922944    4874 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:40:04.922980    4874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:52:42:70:1e:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/enable-default-cni-991000/disk.qcow2
	I1003 17:40:04.924663    4874 main.go:141] libmachine: STDOUT: 
	I1003 17:40:04.924675    4874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:04.924690    4874 client.go:171] LocalClient.Create took 237.697125ms
	I1003 17:40:06.926926    4874 start.go:128] duration metric: createHost completed in 2.298287042s
	I1003 17:40:06.927000    4874 start.go:83] releasing machines lock for "enable-default-cni-991000", held for 2.298832292s
	W1003 17:40:06.927458    4874 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:06.937195    4874 out.go:177] 
	W1003 17:40:06.941265    4874 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:40:06.941289    4874 out.go:239] * 
	* 
	W1003 17:40:06.943721    4874 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:40:06.952185    4874 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.91s)
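One detail specific to this test: the E-level line above records that the deprecated --enable-default-cni flag was translated to --cni=bridge before the cluster config was built, which is why the saved config shows NetworkPlugin:cni and CNI:bridge. A hedged sketch of that translation step (the function and variable names are illustrative, not minikube's own identifiers):

	package main

	import "fmt"

	// normalizeCNI mirrors the translation recorded in the log: when the
	// deprecated --enable-default-cni boolean is set and no explicit --cni
	// value was given, the effective CNI selection becomes "bridge".
	func normalizeCNI(enableDefaultCNI bool, cni string) string {
		if enableDefaultCNI && cni == "" {
			fmt.Println("Found deprecated --enable-default-cni flag, setting --cni=bridge")
			return "bridge"
		}
		return cni
	}

	func main() {
		fmt.Println(normalizeCNI(true, "")) // prints the warning, then "bridge"
	}

The VM creation then fails for the same socket_vmnet reason as the other plugins, so the bridge CNI itself is never exercised.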

TestNetworkPlugins/group/flannel/Start (9.7s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.692421542s)

-- stdout --
	* [flannel-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-991000 in cluster flannel-991000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-991000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:40:09.142310    4987 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:40:09.142467    4987 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:09.142469    4987 out.go:309] Setting ErrFile to fd 2...
	I1003 17:40:09.142472    4987 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:09.142603    4987 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:40:09.143626    4987 out.go:303] Setting JSON to false
	I1003 17:40:09.159611    4987 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2383,"bootTime":1696377626,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:40:09.159688    4987 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:40:09.165385    4987 out.go:177] * [flannel-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:40:09.172473    4987 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:40:09.176384    4987 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:40:09.172535    4987 notify.go:220] Checking for updates...
	I1003 17:40:09.179410    4987 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:40:09.182418    4987 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:40:09.185438    4987 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:40:09.188420    4987 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:40:09.191839    4987 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:40:09.191884    4987 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:40:09.196390    4987 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:40:09.203463    4987 start.go:298] selected driver: qemu2
	I1003 17:40:09.203472    4987 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:40:09.203478    4987 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:40:09.205783    4987 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:40:09.208347    4987 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:40:09.211481    4987 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:40:09.211503    4987 cni.go:84] Creating CNI manager for "flannel"
	I1003 17:40:09.211507    4987 start_flags.go:316] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1003 17:40:09.211514    4987 start_flags.go:321] config:
	{Name:flannel-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:flannel-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:40:09.216066    4987 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:09.221387    4987 out.go:177] * Starting control plane node flannel-991000 in cluster flannel-991000
	I1003 17:40:09.225419    4987 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:40:09.225432    4987 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:40:09.225438    4987 cache.go:57] Caching tarball of preloaded images
	I1003 17:40:09.225488    4987 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:40:09.225494    4987 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:40:09.225556    4987 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/flannel-991000/config.json ...
	I1003 17:40:09.225567    4987 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/flannel-991000/config.json: {Name:mk13066a0c5417eb6506e1a448f73106ac492909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:40:09.225760    4987 start.go:365] acquiring machines lock for flannel-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:09.225790    4987 start.go:369] acquired machines lock for "flannel-991000" in 23.667µs
	I1003 17:40:09.225801    4987 start.go:93] Provisioning new machine with config: &{Name:flannel-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:flannel-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:40:09.225831    4987 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:40:09.234547    4987 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:40:09.250700    4987 start.go:159] libmachine.API.Create for "flannel-991000" (driver="qemu2")
	I1003 17:40:09.250724    4987 client.go:168] LocalClient.Create starting
	I1003 17:40:09.250772    4987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:40:09.250799    4987 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:09.250808    4987 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:09.250843    4987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:40:09.250860    4987 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:09.250867    4987 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:09.251177    4987 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:40:09.362089    4987 main.go:141] libmachine: Creating SSH key...
	I1003 17:40:09.411364    4987 main.go:141] libmachine: Creating Disk image...
	I1003 17:40:09.411373    4987 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:40:09.411515    4987 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/disk.qcow2
	I1003 17:40:09.420303    4987 main.go:141] libmachine: STDOUT: 
	I1003 17:40:09.420324    4987 main.go:141] libmachine: STDERR: 
	I1003 17:40:09.420383    4987 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/disk.qcow2 +20000M
	I1003 17:40:09.427843    4987 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:40:09.427863    4987 main.go:141] libmachine: STDERR: 
	I1003 17:40:09.427878    4987 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/disk.qcow2
	I1003 17:40:09.427883    4987 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:40:09.427918    4987 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:19:3b:ca:2e:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/disk.qcow2
	I1003 17:40:09.429585    4987 main.go:141] libmachine: STDOUT: 
	I1003 17:40:09.429614    4987 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:09.429640    4987 client.go:171] LocalClient.Create took 178.915416ms
	I1003 17:40:11.431818    4987 start.go:128] duration metric: createHost completed in 2.205997333s
	I1003 17:40:11.431917    4987 start.go:83] releasing machines lock for "flannel-991000", held for 2.206160167s
	W1003 17:40:11.431982    4987 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:11.441192    4987 out.go:177] * Deleting "flannel-991000" in qemu2 ...
	W1003 17:40:11.461434    4987 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:11.461464    4987 start.go:703] Will try again in 5 seconds ...
	I1003 17:40:16.463666    4987 start.go:365] acquiring machines lock for flannel-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:16.464103    4987 start.go:369] acquired machines lock for "flannel-991000" in 336.459µs
	I1003 17:40:16.464245    4987 start.go:93] Provisioning new machine with config: &{Name:flannel-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:flannel-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:40:16.464496    4987 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:40:16.473043    4987 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:40:16.521296    4987 start.go:159] libmachine.API.Create for "flannel-991000" (driver="qemu2")
	I1003 17:40:16.521338    4987 client.go:168] LocalClient.Create starting
	I1003 17:40:16.521449    4987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:40:16.521500    4987 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:16.521517    4987 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:16.521574    4987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:40:16.521607    4987 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:16.521619    4987 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:16.522504    4987 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:40:16.645300    4987 main.go:141] libmachine: Creating SSH key...
	I1003 17:40:16.738484    4987 main.go:141] libmachine: Creating Disk image...
	I1003 17:40:16.738490    4987 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:40:16.738645    4987 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/disk.qcow2
	I1003 17:40:16.747708    4987 main.go:141] libmachine: STDOUT: 
	I1003 17:40:16.747723    4987 main.go:141] libmachine: STDERR: 
	I1003 17:40:16.747777    4987 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/disk.qcow2 +20000M
	I1003 17:40:16.755268    4987 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:40:16.755281    4987 main.go:141] libmachine: STDERR: 
	I1003 17:40:16.755297    4987 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/disk.qcow2
	I1003 17:40:16.755305    4987 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:40:16.755359    4987 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:40:12:5e:61:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/flannel-991000/disk.qcow2
	I1003 17:40:16.757017    4987 main.go:141] libmachine: STDOUT: 
	I1003 17:40:16.757031    4987 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:16.757043    4987 client.go:171] LocalClient.Create took 235.704416ms
	I1003 17:40:18.759202    4987 start.go:128] duration metric: createHost completed in 2.294719541s
	I1003 17:40:18.759276    4987 start.go:83] releasing machines lock for "flannel-991000", held for 2.2951935s
	W1003 17:40:18.759677    4987 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:18.771368    4987 out.go:177] 
	W1003 17:40:18.783074    4987 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:40:18.783100    4987 out.go:239] * 
	* 
	W1003 17:40:18.785839    4987 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:40:18.793389    4987 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.70s)
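If those checks show the daemon is down, restarting it should clear this whole group of failures at once. A sketch, assuming socket_vmnet was installed through Homebrew as the minikube qemu2 driver documentation describes (the daemon must run as root to use vmnet):

	# One-time setup so Homebrew can manage the daemon as a service
	brew tap homebrew/services
	# Start socket_vmnet as root; it creates and listens on /var/run/socket_vmnet
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet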

TestNetworkPlugins/group/bridge/Start (9.74s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
E1003 17:40:24.568674    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.733150792s)

-- stdout --
	* [bridge-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-991000 in cluster bridge-991000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-991000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:40:21.172951    5108 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:40:21.173114    5108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:21.173118    5108 out.go:309] Setting ErrFile to fd 2...
	I1003 17:40:21.173120    5108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:21.173259    5108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:40:21.174268    5108 out.go:303] Setting JSON to false
	I1003 17:40:21.190261    5108 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2395,"bootTime":1696377626,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:40:21.190361    5108 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:40:21.196270    5108 out.go:177] * [bridge-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:40:21.204295    5108 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:40:21.204335    5108 notify.go:220] Checking for updates...
	I1003 17:40:21.208230    5108 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:40:21.211229    5108 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:40:21.214263    5108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:40:21.217152    5108 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:40:21.220196    5108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:40:21.223526    5108 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:40:21.223573    5108 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:40:21.228199    5108 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:40:21.235306    5108 start.go:298] selected driver: qemu2
	I1003 17:40:21.235314    5108 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:40:21.235321    5108 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:40:21.237641    5108 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:40:21.241223    5108 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:40:21.244307    5108 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:40:21.244342    5108 cni.go:84] Creating CNI manager for "bridge"
	I1003 17:40:21.244346    5108 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:40:21.244353    5108 start_flags.go:321] config:
	{Name:bridge-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:bridge-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:40:21.248916    5108 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:21.256209    5108 out.go:177] * Starting control plane node bridge-991000 in cluster bridge-991000
	I1003 17:40:21.260228    5108 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:40:21.260242    5108 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:40:21.260248    5108 cache.go:57] Caching tarball of preloaded images
	I1003 17:40:21.260307    5108 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:40:21.260313    5108 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:40:21.260372    5108 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/bridge-991000/config.json ...
	I1003 17:40:21.260386    5108 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/bridge-991000/config.json: {Name:mke7be397835b481e29463470c4af62d1648ec2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:40:21.260600    5108 start.go:365] acquiring machines lock for bridge-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:21.260632    5108 start.go:369] acquired machines lock for "bridge-991000" in 26.375µs
	I1003 17:40:21.260644    5108 start.go:93] Provisioning new machine with config: &{Name:bridge-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:bridge-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:40:21.260690    5108 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:40:21.269231    5108 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:40:21.286067    5108 start.go:159] libmachine.API.Create for "bridge-991000" (driver="qemu2")
	I1003 17:40:21.286093    5108 client.go:168] LocalClient.Create starting
	I1003 17:40:21.286165    5108 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:40:21.286191    5108 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:21.286200    5108 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:21.286234    5108 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:40:21.286252    5108 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:21.286263    5108 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:21.286637    5108 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:40:21.417599    5108 main.go:141] libmachine: Creating SSH key...
	I1003 17:40:21.532248    5108 main.go:141] libmachine: Creating Disk image...
	I1003 17:40:21.532254    5108 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:40:21.532397    5108 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/disk.qcow2
	I1003 17:40:21.541111    5108 main.go:141] libmachine: STDOUT: 
	I1003 17:40:21.541126    5108 main.go:141] libmachine: STDERR: 
	I1003 17:40:21.541171    5108 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/disk.qcow2 +20000M
	I1003 17:40:21.548607    5108 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:40:21.548619    5108 main.go:141] libmachine: STDERR: 
	I1003 17:40:21.548639    5108 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/disk.qcow2
	I1003 17:40:21.548645    5108 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:40:21.548679    5108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:5a:a0:64:be:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/disk.qcow2
	I1003 17:40:21.550352    5108 main.go:141] libmachine: STDOUT: 
	I1003 17:40:21.550366    5108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:21.550386    5108 client.go:171] LocalClient.Create took 264.293042ms
	I1003 17:40:23.552526    5108 start.go:128] duration metric: createHost completed in 2.291863084s
	I1003 17:40:23.552600    5108 start.go:83] releasing machines lock for "bridge-991000", held for 2.292005291s
	W1003 17:40:23.552652    5108 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:23.563990    5108 out.go:177] * Deleting "bridge-991000" in qemu2 ...
	W1003 17:40:23.585338    5108 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:23.585367    5108 start.go:703] Will try again in 5 seconds ...
	I1003 17:40:28.587521    5108 start.go:365] acquiring machines lock for bridge-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:28.587921    5108 start.go:369] acquired machines lock for "bridge-991000" in 288.75µs
	I1003 17:40:28.588051    5108 start.go:93] Provisioning new machine with config: &{Name:bridge-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:bridge-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:40:28.588382    5108 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:40:28.598142    5108 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:40:28.644740    5108 start.go:159] libmachine.API.Create for "bridge-991000" (driver="qemu2")
	I1003 17:40:28.644794    5108 client.go:168] LocalClient.Create starting
	I1003 17:40:28.644888    5108 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:40:28.644941    5108 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:28.644959    5108 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:28.645019    5108 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:40:28.645055    5108 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:28.645066    5108 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:28.645525    5108 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:40:28.769223    5108 main.go:141] libmachine: Creating SSH key...
	I1003 17:40:28.818497    5108 main.go:141] libmachine: Creating Disk image...
	I1003 17:40:28.818506    5108 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:40:28.818657    5108 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/disk.qcow2
	I1003 17:40:28.827392    5108 main.go:141] libmachine: STDOUT: 
	I1003 17:40:28.827408    5108 main.go:141] libmachine: STDERR: 
	I1003 17:40:28.827457    5108 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/disk.qcow2 +20000M
	I1003 17:40:28.834902    5108 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:40:28.834915    5108 main.go:141] libmachine: STDERR: 
	I1003 17:40:28.834930    5108 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/disk.qcow2
	I1003 17:40:28.834935    5108 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:40:28.834978    5108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:81:29:fe:35:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/bridge-991000/disk.qcow2
	I1003 17:40:28.836590    5108 main.go:141] libmachine: STDOUT: 
	I1003 17:40:28.836604    5108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:28.836616    5108 client.go:171] LocalClient.Create took 191.820625ms
	I1003 17:40:30.838817    5108 start.go:128] duration metric: createHost completed in 2.250440834s
	I1003 17:40:30.838913    5108 start.go:83] releasing machines lock for "bridge-991000", held for 2.251013375s
	W1003 17:40:30.839328    5108 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:30.849156    5108 out.go:177] 
	W1003 17:40:30.853215    5108 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:40:30.853242    5108 out.go:239] * 
	* 
	W1003 17:40:30.855770    5108 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:40:30.866145    5108 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.74s)
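The invocations above show how minikube launches QEMU: socket_vmnet_client first connects to the socket, then runs the given command with the connection passed as fd 3 (hence the -netdev socket,id=net0,fd=3 argument). Assuming the client will wrap any command the same way, a trivial payload isolates the daemon from QEMU:

	# Hypothetical isolation test: "ok" means the socket is reachable and the
	# problem is on the QEMU side; "Connection refused" means the daemon is
	# down or listening on a different socket path.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok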

TestStoppedBinaryUpgrade/Upgrade (3.36s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3389871420.exe start -p stopped-upgrade-363000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3389871420.exe start -p stopped-upgrade-363000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3389871420.exe: permission denied (1.957292ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3389871420.exe start -p stopped-upgrade-363000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3389871420.exe start -p stopped-upgrade-363000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3389871420.exe: permission denied (7.657833ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3389871420.exe start -p stopped-upgrade-363000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3389871420.exe start -p stopped-upgrade-363000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3389871420.exe: permission denied (7.589125ms)
version_upgrade_test.go:202: legacy v1.6.2 start failed: fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3389871420.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (3.36s)
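This failure is unrelated to socket_vmnet: "fork/exec ... permission denied" almost always means the downloaded legacy v1.6.2 binary was written to the temp directory without its execute bit set, so the process never starts. A hedged local workaround is to mark it executable and re-run (the durable fix would be a chmod in whichever test helper downloads the binary):

	# Restore the execute bit on the binary the test tried to run
	chmod +x /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3389871420.exe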

TestNetworkPlugins/group/kubenet/Start (9.68s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-991000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.673231958s)

-- stdout --
	* [kubenet-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-991000 in cluster kubenet-991000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-991000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:40:33.037493    5224 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:40:33.037627    5224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:33.037630    5224 out.go:309] Setting ErrFile to fd 2...
	I1003 17:40:33.037633    5224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:33.037770    5224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:40:33.038827    5224 out.go:303] Setting JSON to false
	I1003 17:40:33.054904    5224 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2407,"bootTime":1696377626,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:40:33.054984    5224 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:40:33.060566    5224 out.go:177] * [kubenet-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:40:33.068485    5224 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:40:33.072478    5224 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:40:33.068554    5224 notify.go:220] Checking for updates...
	I1003 17:40:33.078491    5224 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:40:33.081496    5224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:40:33.084460    5224 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:40:33.087550    5224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:40:33.090780    5224 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:40:33.090827    5224 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:40:33.095456    5224 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:40:33.102461    5224 start.go:298] selected driver: qemu2
	I1003 17:40:33.102468    5224 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:40:33.102474    5224 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:40:33.104865    5224 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:40:33.107476    5224 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:40:33.110545    5224 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:40:33.110571    5224 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1003 17:40:33.110576    5224 start_flags.go:321] config:
	{Name:kubenet-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:40:33.115186    5224 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:33.122451    5224 out.go:177] * Starting control plane node kubenet-991000 in cluster kubenet-991000
	I1003 17:40:33.126456    5224 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:40:33.126473    5224 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:40:33.126483    5224 cache.go:57] Caching tarball of preloaded images
	I1003 17:40:33.126533    5224 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:40:33.126538    5224 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:40:33.126587    5224 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/kubenet-991000/config.json ...
	I1003 17:40:33.126598    5224 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/kubenet-991000/config.json: {Name:mk900a11d73a909c4ceab3b0522399c9c2ef1d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:40:33.126798    5224 start.go:365] acquiring machines lock for kubenet-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:33.126826    5224 start.go:369] acquired machines lock for "kubenet-991000" in 22.75µs
	I1003 17:40:33.126837    5224 start.go:93] Provisioning new machine with config: &{Name:kubenet-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:40:33.126873    5224 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:40:33.135484    5224 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:40:33.151637    5224 start.go:159] libmachine.API.Create for "kubenet-991000" (driver="qemu2")
	I1003 17:40:33.151667    5224 client.go:168] LocalClient.Create starting
	I1003 17:40:33.151719    5224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:40:33.151746    5224 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:33.151759    5224 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:33.151795    5224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:40:33.151813    5224 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:33.151819    5224 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:33.152132    5224 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:40:33.262521    5224 main.go:141] libmachine: Creating SSH key...
	I1003 17:40:33.299795    5224 main.go:141] libmachine: Creating Disk image...
	I1003 17:40:33.299801    5224 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:40:33.299951    5224 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2
	I1003 17:40:33.308779    5224 main.go:141] libmachine: STDOUT: 
	I1003 17:40:33.308804    5224 main.go:141] libmachine: STDERR: 
	I1003 17:40:33.308874    5224 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2 +20000M
	I1003 17:40:33.316340    5224 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:40:33.316359    5224 main.go:141] libmachine: STDERR: 
	I1003 17:40:33.316376    5224 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2
	I1003 17:40:33.316385    5224 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:40:33.316447    5224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ac:85:98:92:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2
	I1003 17:40:33.318076    5224 main.go:141] libmachine: STDOUT: 
	I1003 17:40:33.318097    5224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:33.318115    5224 client.go:171] LocalClient.Create took 166.446ms
	I1003 17:40:35.320276    5224 start.go:128] duration metric: createHost completed in 2.1934245s
	I1003 17:40:35.320343    5224 start.go:83] releasing machines lock for "kubenet-991000", held for 2.193551625s
	W1003 17:40:35.320389    5224 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:35.334491    5224 out.go:177] * Deleting "kubenet-991000" in qemu2 ...
	W1003 17:40:35.354543    5224 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:35.354571    5224 start.go:703] Will try again in 5 seconds ...
	I1003 17:40:40.356722    5224 start.go:365] acquiring machines lock for kubenet-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:40.357265    5224 start.go:369] acquired machines lock for "kubenet-991000" in 409.375µs
	I1003 17:40:40.357417    5224 start.go:93] Provisioning new machine with config: &{Name:kubenet-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:40:40.357720    5224 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:40:40.363461    5224 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:40:40.414394    5224 start.go:159] libmachine.API.Create for "kubenet-991000" (driver="qemu2")
	I1003 17:40:40.414437    5224 client.go:168] LocalClient.Create starting
	I1003 17:40:40.414544    5224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:40:40.414602    5224 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:40.414623    5224 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:40.414685    5224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:40:40.414718    5224 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:40.414731    5224 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:40.415184    5224 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:40:40.538519    5224 main.go:141] libmachine: Creating SSH key...
	I1003 17:40:40.622893    5224 main.go:141] libmachine: Creating Disk image...
	I1003 17:40:40.622902    5224 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:40:40.623060    5224 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2
	I1003 17:40:40.631902    5224 main.go:141] libmachine: STDOUT: 
	I1003 17:40:40.631932    5224 main.go:141] libmachine: STDERR: 
	I1003 17:40:40.631981    5224 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2 +20000M
	I1003 17:40:40.639434    5224 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:40:40.639448    5224 main.go:141] libmachine: STDERR: 
	I1003 17:40:40.639466    5224 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2
	I1003 17:40:40.639472    5224 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:40:40.639516    5224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:86:c6:27:9d:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2
	I1003 17:40:40.641150    5224 main.go:141] libmachine: STDOUT: 
	I1003 17:40:40.641175    5224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:40.641188    5224 client.go:171] LocalClient.Create took 226.750167ms
	I1003 17:40:42.643369    5224 start.go:128] duration metric: createHost completed in 2.285653917s
	I1003 17:40:42.643463    5224 start.go:83] releasing machines lock for "kubenet-991000", held for 2.286218542s
	W1003 17:40:42.643844    5224 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:42.653634    5224 out.go:177] 
	W1003 17:40:42.657539    5224 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:40:42.657566    5224 out.go:239] * 
	* 
	W1003 17:40:42.660123    5224 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:40:42.669606    5224 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.68s)
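Editor's note: every VM-start failure in this group has the same root cause, visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the qemu2 VM is never launched. A minimal diagnostic sketch follows. It assumes socket_vmnet is installed under /opt/socket_vmnet with a launchd service, as the paths in the log suggest; the launchd label and plist path are assumptions, not taken from this report.

	# Socket file should exist and the daemon should be loaded (label is an assumption):
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# If the daemon is not loaded, bootstrapping it again may clear the refusals
	# (plist path is an assumption based on the upstream socket_vmnet install layout):
	sudo launchctl bootstrap system /Library/LaunchDaemons/io.github.lima-vm.socket_vmnet.plist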

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-363000
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-363000: exit status 85 (116.389667ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-991000 sudo                               | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo                               | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo                               | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo cat                           | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo cat                           | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo                               | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo                               | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo cat                           | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo docker                        | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo                               | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo                               | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo cat                           | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo cat                           | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo                               | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo                               | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo                               | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo cat                           | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo cat                           | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo                               | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo                               | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo                               | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo find                          | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p flannel-991000 sudo crio                          | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p flannel-991000                                    | flannel-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT | 03 Oct 23 17:40 PDT |
	| start   | -p bridge-991000 --memory=3072                       | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=bridge --driver=qemu2                          |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo cat                            | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo cat                            | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/hosts                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo cat                            | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/resolv.conf                                     |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo crictl                         | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | pods                                                 |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo crictl                         | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | ps --all                                             |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo find                           | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo ip a s                         | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	| ssh     | -p bridge-991000 sudo ip r s                         | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | iptables-save                                        |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo iptables                       | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | -t nat -L -n -v                                      |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo cat                            | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo cat                            | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo cat                            | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo docker                         | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo cat                            | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo cat                            | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo cat                            | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo cat                            | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo                                | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo find                           | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p bridge-991000 sudo crio                           | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p bridge-991000                                     | bridge-991000  | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT | 03 Oct 23 17:40 PDT |
	| start   | -p kubenet-991000                                    | kubenet-991000 | jenkins | v1.31.2 | 03 Oct 23 17:40 PDT |                     |
	|         | --memory=3072                                        |                |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --network-plugin=kubenet                             |                |         |         |                     |                     |
	|         | --driver=qemu2                                       |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/03 17:40:33
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:40:33.037493    5224 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:40:33.037627    5224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:33.037630    5224 out.go:309] Setting ErrFile to fd 2...
	I1003 17:40:33.037633    5224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:33.037770    5224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:40:33.038827    5224 out.go:303] Setting JSON to false
	I1003 17:40:33.054904    5224 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2407,"bootTime":1696377626,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:40:33.054984    5224 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:40:33.060566    5224 out.go:177] * [kubenet-991000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:40:33.068485    5224 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:40:33.072478    5224 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:40:33.068554    5224 notify.go:220] Checking for updates...
	I1003 17:40:33.078491    5224 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:40:33.081496    5224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:40:33.084460    5224 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:40:33.087550    5224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:40:33.090780    5224 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:40:33.090827    5224 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:40:33.095456    5224 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:40:33.102461    5224 start.go:298] selected driver: qemu2
	I1003 17:40:33.102468    5224 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:40:33.102474    5224 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:40:33.104865    5224 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:40:33.107476    5224 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:40:33.110545    5224 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:40:33.110571    5224 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1003 17:40:33.110576    5224 start_flags.go:321] config:
	{Name:kubenet-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:40:33.115186    5224 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:33.122451    5224 out.go:177] * Starting control plane node kubenet-991000 in cluster kubenet-991000
	I1003 17:40:33.126456    5224 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:40:33.126473    5224 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:40:33.126483    5224 cache.go:57] Caching tarball of preloaded images
	I1003 17:40:33.126533    5224 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:40:33.126538    5224 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:40:33.126587    5224 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/kubenet-991000/config.json ...
	I1003 17:40:33.126598    5224 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/kubenet-991000/config.json: {Name:mk900a11d73a909c4ceab3b0522399c9c2ef1d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:40:33.126798    5224 start.go:365] acquiring machines lock for kubenet-991000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:33.126826    5224 start.go:369] acquired machines lock for "kubenet-991000" in 22.75µs
	I1003 17:40:33.126837    5224 start.go:93] Provisioning new machine with config: &{Name:kubenet-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-991000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:40:33.126873    5224 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:40:33.135484    5224 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 17:40:33.151637    5224 start.go:159] libmachine.API.Create for "kubenet-991000" (driver="qemu2")
	I1003 17:40:33.151667    5224 client.go:168] LocalClient.Create starting
	I1003 17:40:33.151719    5224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:40:33.151746    5224 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:33.151759    5224 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:33.151795    5224 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:40:33.151813    5224 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:33.151819    5224 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:33.152132    5224 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:40:33.262521    5224 main.go:141] libmachine: Creating SSH key...
	I1003 17:40:33.299795    5224 main.go:141] libmachine: Creating Disk image...
	I1003 17:40:33.299801    5224 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:40:33.299951    5224 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2
	I1003 17:40:33.308779    5224 main.go:141] libmachine: STDOUT: 
	I1003 17:40:33.308804    5224 main.go:141] libmachine: STDERR: 
	I1003 17:40:33.308874    5224 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2 +20000M
	I1003 17:40:33.316340    5224 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:40:33.316359    5224 main.go:141] libmachine: STDERR: 
	I1003 17:40:33.316376    5224 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2
	I1003 17:40:33.316385    5224 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:40:33.316447    5224 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ac:85:98:92:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/kubenet-991000/disk.qcow2
	I1003 17:40:33.318076    5224 main.go:141] libmachine: STDOUT: 
	I1003 17:40:33.318097    5224 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:33.318115    5224 client.go:171] LocalClient.Create took 166.446ms
	I1003 17:40:35.320276    5224 start.go:128] duration metric: createHost completed in 2.1934245s
	I1003 17:40:35.320343    5224 start.go:83] releasing machines lock for "kubenet-991000", held for 2.193551625s
	W1003 17:40:35.320389    5224 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:35.334491    5224 out.go:177] * Deleting "kubenet-991000" in qemu2 ...
	
	* 
	* Profile "stopped-upgrade-363000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-363000"

                                                
                                                
-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)
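Editor's note: exit status 85 here is the expected follow-on failure: TestStoppedBinaryUpgrade/Upgrade also failed earlier in this run, so the "stopped-upgrade-363000" profile was never created, and `minikube logs -p stopped-upgrade-363000` can only report that the profile does not exist. A quick confirmation sketch, using the same commands the error text itself suggests:

	# Profile should be absent from the list:
	out/minikube-darwin-arm64 profile list
	# The error text suggests recreating it; on this host that would hit the
	# same socket_vmnet connection refusal until the daemon is fixed:
	out/minikube-darwin-arm64 start -p stopped-upgrade-363000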

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-489000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
E1003 17:40:37.530455    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-489000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.727499833s)

                                                
                                                
-- stdout --
	* [old-k8s-version-489000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-489000 in cluster old-k8s-version-489000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-489000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 17:40:36.592457    5256 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:40:36.592616    5256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:36.592619    5256 out.go:309] Setting ErrFile to fd 2...
	I1003 17:40:36.592622    5256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:36.592759    5256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:40:36.593865    5256 out.go:303] Setting JSON to false
	I1003 17:40:36.610020    5256 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2410,"bootTime":1696377626,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:40:36.610109    5256 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:40:36.613634    5256 out.go:177] * [old-k8s-version-489000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:40:36.620763    5256 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:40:36.620825    5256 notify.go:220] Checking for updates...
	I1003 17:40:36.624620    5256 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:40:36.627637    5256 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:40:36.630659    5256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:40:36.633580    5256 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:40:36.636628    5256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:40:36.640103    5256 config.go:182] Loaded profile config "kubenet-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:40:36.640168    5256 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:40:36.640216    5256 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:40:36.644603    5256 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:40:36.651648    5256 start.go:298] selected driver: qemu2
	I1003 17:40:36.651656    5256 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:40:36.651663    5256 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:40:36.654036    5256 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:40:36.656553    5256 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:40:36.659702    5256 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:40:36.659730    5256 cni.go:84] Creating CNI manager for ""
	I1003 17:40:36.659740    5256 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 17:40:36.659744    5256 start_flags.go:321] config:
	{Name:old-k8s-version-489000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-489000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:40:36.664246    5256 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:36.671594    5256 out.go:177] * Starting control plane node old-k8s-version-489000 in cluster old-k8s-version-489000
	I1003 17:40:36.675617    5256 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 17:40:36.675632    5256 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1003 17:40:36.675644    5256 cache.go:57] Caching tarball of preloaded images
	I1003 17:40:36.675696    5256 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:40:36.675701    5256 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1003 17:40:36.675767    5256 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/old-k8s-version-489000/config.json ...
	I1003 17:40:36.675777    5256 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/old-k8s-version-489000/config.json: {Name:mk06d8e0eed4017a294d5edd004164288bf165c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:40:36.676000    5256 start.go:365] acquiring machines lock for old-k8s-version-489000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:36.676030    5256 start.go:369] acquired machines lock for "old-k8s-version-489000" in 24.375µs
	I1003 17:40:36.676041    5256 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-489000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-489000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:40:36.676074    5256 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:40:36.683622    5256 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:40:36.699485    5256 start.go:159] libmachine.API.Create for "old-k8s-version-489000" (driver="qemu2")
	I1003 17:40:36.699513    5256 client.go:168] LocalClient.Create starting
	I1003 17:40:36.699569    5256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:40:36.699599    5256 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:36.699610    5256 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:36.699643    5256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:40:36.699660    5256 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:36.699670    5256 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:36.700023    5256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:40:36.808444    5256 main.go:141] libmachine: Creating SSH key...
	I1003 17:40:36.887017    5256 main.go:141] libmachine: Creating Disk image...
	I1003 17:40:36.887023    5256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:40:36.887179    5256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I1003 17:40:36.895979    5256 main.go:141] libmachine: STDOUT: 
	I1003 17:40:36.895996    5256 main.go:141] libmachine: STDERR: 
	I1003 17:40:36.896054    5256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2 +20000M
	I1003 17:40:36.903598    5256 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:40:36.903610    5256 main.go:141] libmachine: STDERR: 
	I1003 17:40:36.903622    5256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I1003 17:40:36.903629    5256 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:40:36.903660    5256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:7b:2b:c3:2e:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I1003 17:40:36.905326    5256 main.go:141] libmachine: STDOUT: 
	I1003 17:40:36.905341    5256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:36.905359    5256 client.go:171] LocalClient.Create took 205.845166ms
	I1003 17:40:38.907505    5256 start.go:128] duration metric: createHost completed in 2.23145175s
	I1003 17:40:38.907582    5256 start.go:83] releasing machines lock for "old-k8s-version-489000", held for 2.231587042s
	W1003 17:40:38.907635    5256 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:38.915016    5256 out.go:177] * Deleting "old-k8s-version-489000" in qemu2 ...
	W1003 17:40:38.936456    5256 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:38.936489    5256 start.go:703] Will try again in 5 seconds ...
	I1003 17:40:43.938473    5256 start.go:365] acquiring machines lock for old-k8s-version-489000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:43.938580    5256 start.go:369] acquired machines lock for "old-k8s-version-489000" in 78.5µs
	I1003 17:40:43.938601    5256 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-489000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-489000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:40:43.938914    5256 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:40:43.942687    5256 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:40:43.958126    5256 start.go:159] libmachine.API.Create for "old-k8s-version-489000" (driver="qemu2")
	I1003 17:40:43.958151    5256 client.go:168] LocalClient.Create starting
	I1003 17:40:43.958223    5256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:40:43.958248    5256 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:43.958257    5256 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:43.958291    5256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:40:43.958305    5256 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:43.958312    5256 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:43.958571    5256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:40:44.153063    5256 main.go:141] libmachine: Creating SSH key...
	I1003 17:40:44.234997    5256 main.go:141] libmachine: Creating Disk image...
	I1003 17:40:44.235008    5256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:40:44.235188    5256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I1003 17:40:44.245004    5256 main.go:141] libmachine: STDOUT: 
	I1003 17:40:44.245031    5256 main.go:141] libmachine: STDERR: 
	I1003 17:40:44.245086    5256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2 +20000M
	I1003 17:40:44.253594    5256 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:40:44.253620    5256 main.go:141] libmachine: STDERR: 
	I1003 17:40:44.253633    5256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I1003 17:40:44.253642    5256 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:40:44.253683    5256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:db:7a:43:11:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I1003 17:40:44.255484    5256 main.go:141] libmachine: STDOUT: 
	I1003 17:40:44.255505    5256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:44.255517    5256 client.go:171] LocalClient.Create took 297.369125ms
	I1003 17:40:46.257529    5256 start.go:128] duration metric: createHost completed in 2.318653792s
	I1003 17:40:46.257545    5256 start.go:83] releasing machines lock for "old-k8s-version-489000", held for 2.319004875s
	W1003 17:40:46.257628    5256 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-489000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-489000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:46.269829    5256 out.go:177] 
	W1003 17:40:46.274841    5256 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:40:46.274848    5256 out.go:239] * 
	* 
	W1003 17:40:46.275398    5256 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:40:46.284829    5256 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-489000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (31.984042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.76s)
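
Note: the trace above also shows minikube's recovery behavior: after the first createHost fails, it deletes the half-created machine, waits five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION. A simplified Go sketch of that fixed-delay retry shape (an illustration inferred from the log, not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the driver call that fails above; here it
	// always returns the same error the log records.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}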

TestStartStop/group/no-preload/serial/FirstStart (11.29s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-387000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-387000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (11.23932725s)

-- stdout --
	* [no-preload-387000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-387000 in cluster no-preload-387000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-387000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:40:44.892876    5374 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:40:44.893015    5374 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:44.893018    5374 out.go:309] Setting ErrFile to fd 2...
	I1003 17:40:44.893020    5374 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:44.893155    5374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:40:44.894189    5374 out.go:303] Setting JSON to false
	I1003 17:40:44.910490    5374 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2418,"bootTime":1696377626,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:40:44.910558    5374 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:40:44.915758    5374 out.go:177] * [no-preload-387000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:40:44.922712    5374 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:40:44.926678    5374 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:40:44.922755    5374 notify.go:220] Checking for updates...
	I1003 17:40:44.932634    5374 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:40:44.935693    5374 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:40:44.938697    5374 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:40:44.941620    5374 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:40:44.945074    5374 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:40:44.945142    5374 config.go:182] Loaded profile config "old-k8s-version-489000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1003 17:40:44.945182    5374 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:40:44.949632    5374 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:40:44.956667    5374 start.go:298] selected driver: qemu2
	I1003 17:40:44.956674    5374 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:40:44.956679    5374 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:40:44.959052    5374 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:40:44.963661    5374 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:40:44.966680    5374 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:40:44.966703    5374 cni.go:84] Creating CNI manager for ""
	I1003 17:40:44.966710    5374 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:40:44.966714    5374 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:40:44.966721    5374 start_flags.go:321] config:
	{Name:no-preload-387000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:40:44.971640    5374 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:44.978629    5374 out.go:177] * Starting control plane node no-preload-387000 in cluster no-preload-387000
	I1003 17:40:44.982660    5374 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:40:44.982762    5374 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/no-preload-387000/config.json ...
	I1003 17:40:44.982779    5374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/no-preload-387000/config.json: {Name:mk8e25305bc6b4664df16192720bcaf130349ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:40:44.982776    5374 cache.go:107] acquiring lock: {Name:mka1cadac3ebecf1c9f0651f202b5f351e41005c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:44.982791    5374 cache.go:107] acquiring lock: {Name:mk41e89125d87e99d3392f0309bfa67012430e6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:44.982838    5374 cache.go:115] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1003 17:40:44.982848    5374 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 73.958µs
	I1003 17:40:44.982856    5374 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1003 17:40:44.982863    5374 cache.go:107] acquiring lock: {Name:mk9c954df8c40eab9e63135c299422ee7d4595c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:44.982796    5374 cache.go:107] acquiring lock: {Name:mk2d73d400828c89b21b23d1287203b88a4ce158 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:44.982940    5374 cache.go:107] acquiring lock: {Name:mk45f4043317dbf23f99ddc39c1b6a2bf7c4986f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:44.982973    5374 cache.go:107] acquiring lock: {Name:mkfa90c45295cc810cd1d94ded01058123307481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:44.982988    5374 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I1003 17:40:44.982979    5374 cache.go:107] acquiring lock: {Name:mkae1f3bac64dea55d82d62f9c453ca6652b523c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:44.983014    5374 cache.go:107] acquiring lock: {Name:mk414acf30c2723183de6339d91dcdffba31737d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:44.982996    5374 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I1003 17:40:44.983087    5374 start.go:365] acquiring machines lock for no-preload-387000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:44.983127    5374 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1003 17:40:44.983185    5374 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1003 17:40:44.983199    5374 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1003 17:40:44.983229    5374 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1003 17:40:44.983339    5374 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I1003 17:40:44.989217    5374 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I1003 17:40:44.989260    5374 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1003 17:40:44.989265    5374 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I1003 17:40:44.989305    5374 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1003 17:40:44.989317    5374 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1003 17:40:44.989383    5374 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1003 17:40:44.989473    5374 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I1003 17:40:45.602795    5374 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2
	I1003 17:40:45.642746    5374 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I1003 17:40:45.831094    5374 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2
	I1003 17:40:46.055572    5374 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I1003 17:40:46.257588    5374 start.go:369] acquired machines lock for "no-preload-387000" in 1.274513334s
	I1003 17:40:46.257626    5374 start.go:93] Provisioning new machine with config: &{Name:no-preload-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:40:46.257683    5374 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:40:46.265787    5374 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:40:46.273724    5374 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I1003 17:40:46.281248    5374 start.go:159] libmachine.API.Create for "no-preload-387000" (driver="qemu2")
	I1003 17:40:46.281265    5374 client.go:168] LocalClient.Create starting
	I1003 17:40:46.281323    5374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:40:46.281350    5374 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:46.281369    5374 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:46.281408    5374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:40:46.281432    5374 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:46.281442    5374 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:46.289334    5374 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:40:46.419187    5374 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I1003 17:40:46.419207    5374 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.436325459s
	I1003 17:40:46.419214    5374 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I1003 17:40:46.436066    5374 main.go:141] libmachine: Creating SSH key...
	I1003 17:40:46.499948    5374 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2
	I1003 17:40:46.594945    5374 main.go:141] libmachine: Creating Disk image...
	I1003 17:40:46.594955    5374 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:40:46.595450    5374 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2
	I1003 17:40:46.606480    5374 main.go:141] libmachine: STDOUT: 
	I1003 17:40:46.606491    5374 main.go:141] libmachine: STDERR: 
	I1003 17:40:46.606546    5374 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2 +20000M
	I1003 17:40:46.614883    5374 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:40:46.614898    5374 main.go:141] libmachine: STDERR: 
	I1003 17:40:46.614918    5374 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2
	I1003 17:40:46.614925    5374 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:40:46.614967    5374 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:3f:08:5f:c1:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2
	I1003 17:40:46.617015    5374 main.go:141] libmachine: STDOUT: 
	I1003 17:40:46.617042    5374 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:46.617065    5374 client.go:171] LocalClient.Create took 335.8015ms
	I1003 17:40:46.757932    5374 cache.go:162] opening:  /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2
	I1003 17:40:47.555640    5374 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I1003 17:40:47.555693    5374 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 2.572834541s
	I1003 17:40:47.555735    5374 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I1003 17:40:48.138529    5374 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 exists
	I1003 17:40:48.138588    5374 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2" took 3.155780958s
	I1003 17:40:48.138630    5374 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 succeeded
	I1003 17:40:48.617312    5374 start.go:128] duration metric: createHost completed in 2.359640917s
	I1003 17:40:48.617373    5374 start.go:83] releasing machines lock for "no-preload-387000", held for 2.359816583s
	W1003 17:40:48.617435    5374 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:48.633885    5374 out.go:177] * Deleting "no-preload-387000" in qemu2 ...
	W1003 17:40:48.659304    5374 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:48.659339    5374 start.go:703] Will try again in 5 seconds ...
	I1003 17:40:49.628399    5374 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 exists
	I1003 17:40:49.628454    5374 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2" took 4.645641709s
	I1003 17:40:49.628483    5374 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 succeeded
	I1003 17:40:50.336254    5374 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 exists
	I1003 17:40:50.336323    5374 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2" took 5.353644416s
	I1003 17:40:50.336350    5374 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 succeeded
	I1003 17:40:50.514137    5374 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 exists
	I1003 17:40:50.514207    5374 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2" took 5.531530125s
	I1003 17:40:50.514237    5374 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 succeeded
	I1003 17:40:53.668686    5374 start.go:365] acquiring machines lock for no-preload-387000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:53.676808    5374 start.go:369] acquired machines lock for "no-preload-387000" in 8.0695ms
	I1003 17:40:53.676873    5374 start.go:93] Provisioning new machine with config: &{Name:no-preload-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:40:53.677098    5374 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:40:53.685461    5374 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:40:53.729879    5374 start.go:159] libmachine.API.Create for "no-preload-387000" (driver="qemu2")
	I1003 17:40:53.729922    5374 client.go:168] LocalClient.Create starting
	I1003 17:40:53.730056    5374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:40:53.730126    5374 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:53.730153    5374 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:53.730249    5374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:40:53.730291    5374 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:53.730307    5374 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:53.730782    5374 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:40:53.859694    5374 main.go:141] libmachine: Creating SSH key...
	I1003 17:40:54.034389    5374 main.go:141] libmachine: Creating Disk image...
	I1003 17:40:54.034399    5374 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:40:54.034567    5374 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2
	I1003 17:40:54.044138    5374 main.go:141] libmachine: STDOUT: 
	I1003 17:40:54.044158    5374 main.go:141] libmachine: STDERR: 
	I1003 17:40:54.044234    5374 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2 +20000M
	I1003 17:40:54.053050    5374 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:40:54.053065    5374 main.go:141] libmachine: STDERR: 
	I1003 17:40:54.053083    5374 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2
	I1003 17:40:54.053092    5374 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:40:54.053148    5374 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:c9:8d:2a:f1:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2
	I1003 17:40:54.055372    5374 main.go:141] libmachine: STDOUT: 
	I1003 17:40:54.055387    5374 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:54.055400    5374 client.go:171] LocalClient.Create took 325.479125ms
	I1003 17:40:55.922559    5374 cache.go:157] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I1003 17:40:55.922628    5374 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 10.939930459s
	I1003 17:40:55.922682    5374 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I1003 17:40:55.922728    5374 cache.go:87] Successfully saved all images to host disk.
	I1003 17:40:56.057544    5374 start.go:128] duration metric: createHost completed in 2.380431667s
	I1003 17:40:56.057645    5374 start.go:83] releasing machines lock for "no-preload-387000", held for 2.38081875s
	W1003 17:40:56.057834    5374 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-387000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-387000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:56.079329    5374 out.go:177] 
	W1003 17:40:56.083498    5374 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:40:56.083528    5374 out.go:239] * 
	* 
	W1003 17:40:56.086319    5374 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:40:56.095151    5374 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-387000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000: exit status 7 (49.441542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-387000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (11.29s)
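Note: every start failure in this run shares the same host-side root cause: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor and the VM never boots. A minimal triage sketch for the CI host follows; the binary and socket paths match the log above, but the manual start line and gateway address are assumptions taken from socket_vmnet's README, not from this report:

	# does the socket exist, and is a daemon serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if no daemon is running, start one by hand (gateway address is an assumption)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet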

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-489000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-489000 create -f testdata/busybox.yaml: exit status 1 (31.420292ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-489000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (35.25425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (34.969667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
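Note: kubectl's "error: no openapi getter" means the client could not fetch the OpenAPI schema because the context's API server is unreachable (the VM above never started); testdata/busybox.yaml was never evaluated at all. Two standard kubectl probes confirm it is the cluster rather than the manifest; both should fail with a connection error while the host is stopped:

	kubectl --context old-k8s-version-489000 cluster-info
	kubectl --context old-k8s-version-489000 get nodes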

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-489000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-489000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-489000 describe deploy/metrics-server -n kube-system: exit status 1 (28.498458ms)

** stderr ** 
	error: context "old-k8s-version-489000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-489000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (30.416958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (7.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-489000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-489000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (7.018024708s)

-- stdout --
	* [old-k8s-version-489000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-489000 in cluster old-k8s-version-489000
	* Restarting existing qemu2 VM for "old-k8s-version-489000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-489000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:40:46.722333    5504 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:40:46.722482    5504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:46.722485    5504 out.go:309] Setting ErrFile to fd 2...
	I1003 17:40:46.722488    5504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:46.722624    5504 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:40:46.723715    5504 out.go:303] Setting JSON to false
	I1003 17:40:46.740034    5504 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2420,"bootTime":1696377626,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:40:46.740131    5504 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:40:46.743608    5504 out.go:177] * [old-k8s-version-489000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:40:46.755623    5504 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:40:46.750726    5504 notify.go:220] Checking for updates...
	I1003 17:40:46.762590    5504 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:40:46.769592    5504 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:40:46.777670    5504 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:40:46.785635    5504 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:40:46.792629    5504 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:40:46.796959    5504 config.go:182] Loaded profile config "old-k8s-version-489000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1003 17:40:46.800606    5504 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1003 17:40:46.804638    5504 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:40:46.808667    5504 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 17:40:46.815608    5504 start.go:298] selected driver: qemu2
	I1003 17:40:46.815614    5504 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-489000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-489000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:40:46.815682    5504 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:40:46.818089    5504 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:40:46.818123    5504 cni.go:84] Creating CNI manager for ""
	I1003 17:40:46.818130    5504 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 17:40:46.818134    5504 start_flags.go:321] config:
	{Name:old-k8s-version-489000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-489000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:40:46.822620    5504 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:46.830666    5504 out.go:177] * Starting control plane node old-k8s-version-489000 in cluster old-k8s-version-489000
	I1003 17:40:46.834516    5504 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 17:40:46.834530    5504 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1003 17:40:46.834549    5504 cache.go:57] Caching tarball of preloaded images
	I1003 17:40:46.834611    5504 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:40:46.834619    5504 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1003 17:40:46.834705    5504 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/old-k8s-version-489000/config.json ...
	I1003 17:40:46.835060    5504 start.go:365] acquiring machines lock for old-k8s-version-489000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:48.617550    5504 start.go:369] acquired machines lock for "old-k8s-version-489000" in 1.782495625s
	I1003 17:40:48.617622    5504 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:40:48.617666    5504 fix.go:54] fixHost starting: 
	I1003 17:40:48.618303    5504 fix.go:102] recreateIfNeeded on old-k8s-version-489000: state=Stopped err=<nil>
	W1003 17:40:48.618350    5504 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:40:48.629009    5504 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-489000" ...
	I1003 17:40:48.639118    5504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:db:7a:43:11:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I1003 17:40:48.650781    5504 main.go:141] libmachine: STDOUT: 
	I1003 17:40:48.650858    5504 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:48.650990    5504 fix.go:56] fixHost completed within 33.328916ms
	I1003 17:40:48.651012    5504 start.go:83] releasing machines lock for "old-k8s-version-489000", held for 33.423625ms
	W1003 17:40:48.651039    5504 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:40:48.651254    5504 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:48.651271    5504 start.go:703] Will try again in 5 seconds ...
	I1003 17:40:53.651479    5504 start.go:365] acquiring machines lock for old-k8s-version-489000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:53.651991    5504 start.go:369] acquired machines lock for "old-k8s-version-489000" in 375.541µs
	I1003 17:40:53.652148    5504 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:40:53.652170    5504 fix.go:54] fixHost starting: 
	I1003 17:40:53.653008    5504 fix.go:102] recreateIfNeeded on old-k8s-version-489000: state=Stopped err=<nil>
	W1003 17:40:53.653036    5504 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:40:53.658632    5504 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-489000" ...
	I1003 17:40:53.666875    5504 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:db:7a:43:11:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/old-k8s-version-489000/disk.qcow2
	I1003 17:40:53.676508    5504 main.go:141] libmachine: STDOUT: 
	I1003 17:40:53.676571    5504 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:53.676685    5504 fix.go:56] fixHost completed within 24.514416ms
	I1003 17:40:53.676713    5504 start.go:83] releasing machines lock for "old-k8s-version-489000", held for 24.693958ms
	W1003 17:40:53.677021    5504 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-489000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-489000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:53.692836    5504 out.go:177] 
	W1003 17:40:53.695607    5504 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:40:53.695628    5504 out.go:239] * 
	* 
	W1003 17:40:53.697424    5504 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:40:53.704584    5504 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-489000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (47.443167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (7.07s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-489000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (33.010291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-489000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-489000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-489000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.727666ms)

** stderr ** 
	error: context "old-k8s-version-489000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-489000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (32.403708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-489000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-489000 "sudo crictl images -o json": exit status 89 (46.111584ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-489000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-489000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-489000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (28.71625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
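Note: the decode failure ("invalid character '*'") is the test trying to parse minikube's "control plane node must be running" banner as crictl JSON. On a healthy node the same probe returns CRI's image listing; a sketch of how the expected data could be inspected by hand, assuming crictl's usual {"images":[{"repoTags":[...]}]} output shape and jq available on the host:

	out/minikube-darwin-arm64 ssh -p old-k8s-version-489000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'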

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-489000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-489000 --alsologtostderr -v=1: exit status 89 (43.766625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-489000"

                                                
** stderr ** 
	I1003 17:40:53.961658    5524 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:40:53.962050    5524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:53.962056    5524 out.go:309] Setting ErrFile to fd 2...
	I1003 17:40:53.962059    5524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:53.962198    5524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:40:53.962408    5524 out.go:303] Setting JSON to false
	I1003 17:40:53.962417    5524 mustload.go:65] Loading cluster: old-k8s-version-489000
	I1003 17:40:53.962599    5524 config.go:182] Loaded profile config "old-k8s-version-489000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1003 17:40:53.966556    5524 out.go:177] * The control plane node must be running for this command
	I1003 17:40:53.974589    5524 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-489000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-489000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (28.417625ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (28.561125ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-489000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (11.63s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-391000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-391000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (11.581082666s)

-- stdout --
	* [embed-certs-391000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-391000 in cluster embed-certs-391000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-391000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:40:54.426478    5550 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:40:54.426617    5550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:54.426622    5550 out.go:309] Setting ErrFile to fd 2...
	I1003 17:40:54.426624    5550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:54.426757    5550 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:40:54.427790    5550 out.go:303] Setting JSON to false
	I1003 17:40:54.443800    5550 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2428,"bootTime":1696377626,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:40:54.443866    5550 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:40:54.447607    5550 out.go:177] * [embed-certs-391000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:40:54.457528    5550 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:40:54.453587    5550 notify.go:220] Checking for updates...
	I1003 17:40:54.465526    5550 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:40:54.472571    5550 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:40:54.480544    5550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:40:54.488517    5550 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:40:54.496559    5550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:40:54.500838    5550 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:40:54.500906    5550 config.go:182] Loaded profile config "no-preload-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:40:54.500957    5550 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:40:54.505564    5550 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:40:54.514549    5550 start.go:298] selected driver: qemu2
	I1003 17:40:54.514555    5550 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:40:54.514561    5550 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:40:54.516853    5550 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:40:54.520612    5550 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:40:54.522104    5550 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:40:54.522133    5550 cni.go:84] Creating CNI manager for ""
	I1003 17:40:54.522142    5550 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:40:54.522146    5550 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:40:54.522153    5550 start_flags.go:321] config:
	{Name:embed-certs-391000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-391000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:40:54.526863    5550 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:54.534555    5550 out.go:177] * Starting control plane node embed-certs-391000 in cluster embed-certs-391000
	I1003 17:40:54.538534    5550 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:40:54.538547    5550 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:40:54.538555    5550 cache.go:57] Caching tarball of preloaded images
	I1003 17:40:54.538611    5550 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:40:54.538617    5550 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:40:54.538689    5550 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/embed-certs-391000/config.json ...
	I1003 17:40:54.538700    5550 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/embed-certs-391000/config.json: {Name:mkec212db7c6f87404b9777a1fde544f25333fdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:40:54.538921    5550 start.go:365] acquiring machines lock for embed-certs-391000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:56.057754    5550 start.go:369] acquired machines lock for "embed-certs-391000" in 1.518837709s
	I1003 17:40:56.057950    5550 start.go:93] Provisioning new machine with config: &{Name:embed-certs-391000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-391000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:40:56.058208    5550 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:40:56.074392    5550 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:40:56.125143    5550 start.go:159] libmachine.API.Create for "embed-certs-391000" (driver="qemu2")
	I1003 17:40:56.125185    5550 client.go:168] LocalClient.Create starting
	I1003 17:40:56.125290    5550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:40:56.125338    5550 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:56.125363    5550 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:56.125425    5550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:40:56.125459    5550 main.go:141] libmachine: Decoding PEM data...
	I1003 17:40:56.125473    5550 main.go:141] libmachine: Parsing certificate...
	I1003 17:40:56.126058    5550 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:40:56.258364    5550 main.go:141] libmachine: Creating SSH key...
	I1003 17:40:56.469821    5550 main.go:141] libmachine: Creating Disk image...
	I1003 17:40:56.469832    5550 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:40:56.470005    5550 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2
	I1003 17:40:56.479725    5550 main.go:141] libmachine: STDOUT: 
	I1003 17:40:56.479753    5550 main.go:141] libmachine: STDERR: 
	I1003 17:40:56.479814    5550 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2 +20000M
	I1003 17:40:56.490676    5550 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:40:56.490692    5550 main.go:141] libmachine: STDERR: 
	I1003 17:40:56.490716    5550 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2
	I1003 17:40:56.490725    5550 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:40:56.490765    5550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:14:d1:91:e6:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2
	I1003 17:40:56.492562    5550 main.go:141] libmachine: STDOUT: 
	I1003 17:40:56.492578    5550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:56.492602    5550 client.go:171] LocalClient.Create took 367.416ms
	I1003 17:40:58.494803    5550 start.go:128] duration metric: createHost completed in 2.43660825s
	I1003 17:40:58.494879    5550 start.go:83] releasing machines lock for "embed-certs-391000", held for 2.4371405s
	W1003 17:40:58.494925    5550 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:58.512340    5550 out.go:177] * Deleting "embed-certs-391000" in qemu2 ...
	W1003 17:40:58.534580    5550 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:58.534610    5550 start.go:703] Will try again in 5 seconds ...
	I1003 17:41:03.536243    5550 start.go:365] acquiring machines lock for embed-certs-391000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:41:03.554613    5550 start.go:369] acquired machines lock for "embed-certs-391000" in 18.275ms
	I1003 17:41:03.554676    5550 start.go:93] Provisioning new machine with config: &{Name:embed-certs-391000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-391000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:41:03.554888    5550 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:41:03.563017    5550 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:41:03.607812    5550 start.go:159] libmachine.API.Create for "embed-certs-391000" (driver="qemu2")
	I1003 17:41:03.607846    5550 client.go:168] LocalClient.Create starting
	I1003 17:41:03.607942    5550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:41:03.608000    5550 main.go:141] libmachine: Decoding PEM data...
	I1003 17:41:03.608025    5550 main.go:141] libmachine: Parsing certificate...
	I1003 17:41:03.608085    5550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:41:03.608119    5550 main.go:141] libmachine: Decoding PEM data...
	I1003 17:41:03.608133    5550 main.go:141] libmachine: Parsing certificate...
	I1003 17:41:03.608597    5550 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:41:03.736968    5550 main.go:141] libmachine: Creating SSH key...
	I1003 17:41:03.919258    5550 main.go:141] libmachine: Creating Disk image...
	I1003 17:41:03.919272    5550 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:41:03.919459    5550 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2
	I1003 17:41:03.929083    5550 main.go:141] libmachine: STDOUT: 
	I1003 17:41:03.929100    5550 main.go:141] libmachine: STDERR: 
	I1003 17:41:03.929158    5550 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2 +20000M
	I1003 17:41:03.937547    5550 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:41:03.937563    5550 main.go:141] libmachine: STDERR: 
	I1003 17:41:03.937579    5550 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2
	I1003 17:41:03.937586    5550 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:41:03.937631    5550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:3e:53:19:41:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2
	I1003 17:41:03.939583    5550 main.go:141] libmachine: STDOUT: 
	I1003 17:41:03.939596    5550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:41:03.939612    5550 client.go:171] LocalClient.Create took 331.767375ms
	I1003 17:41:05.941957    5550 start.go:128] duration metric: createHost completed in 2.387005083s
	I1003 17:41:05.942046    5550 start.go:83] releasing machines lock for "embed-certs-391000", held for 2.387444833s
	W1003 17:41:05.942405    5550 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-391000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-391000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:05.953922    5550 out.go:177] 
	W1003 17:41:05.959029    5550 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:41:05.959066    5550 out.go:239] * 
	* 
	W1003 17:41:05.962030    5550 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:41:05.970903    5550 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-391000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (50.496375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.63s)

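Every qemu2 start failure in this run reduces to the same condition: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU ever boots. The standalone Go sketch below (not part of the test suite; the socket path is taken from the failing command lines above) reproduces that connectivity check:

	// socketcheck.go: a minimal standalone sketch, not minikube code.
	// It dials the unix socket that socket_vmnet_client needs; when the
	// daemon is down, Dial fails with the same "connection refused".
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing qemu command lines
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails the same way on the build host, the likely fix is to restart the socket_vmnet daemon there rather than to rerun the tests.
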
TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-387000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-387000 create -f testdata/busybox.yaml: exit status 1 (30.957542ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-387000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000: exit status 7 (31.477792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-387000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000: exit status 7 (34.419333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-387000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-387000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-387000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-387000 describe deploy/metrics-server -n kube-system: exit status 1 (27.318292ms)

** stderr ** 
	error: context "no-preload-387000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-387000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000: exit status 7 (29.092958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-387000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

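The context "no-preload-387000" errors in this group follow directly from the failed start: the VM was never provisioned, so minikube never wrote a context for the profile into the kubeconfig. A sketch of that missing-context check using k8s.io/client-go (an assumed dependency here, not test-suite code; the kubeconfig path is the one reported in the trace above):

	// contextcheck.go: illustrative only; assumes k8s.io/client-go.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as reported in the KUBECONFIG line of the trace.
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/17345-986/kubeconfig")
		if err != nil {
			fmt.Println("cannot read kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["no-preload-387000"]; !ok {
			fmt.Println(`context "no-preload-387000" does not exist`)
		}
	}
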
TestStartStop/group/no-preload/serial/SecondStart (7.12s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-387000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-387000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (7.074655291s)

-- stdout --
	* [no-preload-387000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-387000 in cluster no-preload-387000
	* Restarting existing qemu2 VM for "no-preload-387000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-387000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:40:56.543795    5580 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:40:56.543944    5580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:56.543947    5580 out.go:309] Setting ErrFile to fd 2...
	I1003 17:40:56.543949    5580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:40:56.544074    5580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:40:56.545055    5580 out.go:303] Setting JSON to false
	I1003 17:40:56.561241    5580 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2430,"bootTime":1696377626,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:40:56.561336    5580 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:40:56.565690    5580 out.go:177] * [no-preload-387000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:40:56.573658    5580 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:40:56.577566    5580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:40:56.573759    5580 notify.go:220] Checking for updates...
	I1003 17:40:56.583595    5580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:40:56.586569    5580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:40:56.589609    5580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:40:56.592629    5580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:40:56.594347    5580 config.go:182] Loaded profile config "no-preload-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:40:56.594608    5580 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:40:56.598616    5580 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 17:40:56.605454    5580 start.go:298] selected driver: qemu2
	I1003 17:40:56.605460    5580 start.go:902] validating driver "qemu2" against &{Name:no-preload-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:40:56.605508    5580 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:40:56.607870    5580 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:40:56.607896    5580 cni.go:84] Creating CNI manager for ""
	I1003 17:40:56.607904    5580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:40:56.607911    5580 start_flags.go:321] config:
	{Name:no-preload-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:40:56.612300    5580 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:56.619623    5580 out.go:177] * Starting control plane node no-preload-387000 in cluster no-preload-387000
	I1003 17:40:56.623528    5580 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:40:56.623587    5580 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/no-preload-387000/config.json ...
	I1003 17:40:56.623627    5580 cache.go:107] acquiring lock: {Name:mk2d73d400828c89b21b23d1287203b88a4ce158 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:56.623637    5580 cache.go:107] acquiring lock: {Name:mka1cadac3ebecf1c9f0651f202b5f351e41005c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:56.623673    5580 cache.go:107] acquiring lock: {Name:mk41e89125d87e99d3392f0309bfa67012430e6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:56.623677    5580 cache.go:115] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 exists
	I1003 17:40:56.623684    5580 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2" took 60.416µs
	I1003 17:40:56.623690    5580 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 succeeded
	I1003 17:40:56.623695    5580 cache.go:107] acquiring lock: {Name:mk9c954df8c40eab9e63135c299422ee7d4595c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:56.623703    5580 cache.go:115] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1003 17:40:56.623712    5580 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 77.833µs
	I1003 17:40:56.623717    5580 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1003 17:40:56.623726    5580 cache.go:115] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 exists
	I1003 17:40:56.623725    5580 cache.go:107] acquiring lock: {Name:mkfa90c45295cc810cd1d94ded01058123307481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:56.623730    5580 cache.go:115] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 exists
	I1003 17:40:56.623729    5580 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2" took 34.583µs
	I1003 17:40:56.623737    5580 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 succeeded
	I1003 17:40:56.623737    5580 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2" took 91.459µs
	I1003 17:40:56.623742    5580 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 succeeded
	I1003 17:40:56.623741    5580 cache.go:107] acquiring lock: {Name:mk45f4043317dbf23f99ddc39c1b6a2bf7c4986f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:56.623752    5580 cache.go:107] acquiring lock: {Name:mk414acf30c2723183de6339d91dcdffba31737d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:56.623765    5580 cache.go:115] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I1003 17:40:56.623770    5580 cache.go:115] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I1003 17:40:56.623769    5580 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 44.416µs
	I1003 17:40:56.623774    5580 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 33µs
	I1003 17:40:56.623778    5580 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I1003 17:40:56.623776    5580 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I1003 17:40:56.623785    5580 cache.go:115] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 exists
	I1003 17:40:56.623790    5580 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2" took 39.125µs
	I1003 17:40:56.623795    5580 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 succeeded
	I1003 17:40:56.623783    5580 cache.go:107] acquiring lock: {Name:mkae1f3bac64dea55d82d62f9c453ca6652b523c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:40:56.623851    5580 cache.go:115] /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I1003 17:40:56.623855    5580 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 105.166µs
	I1003 17:40:56.623859    5580 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I1003 17:40:56.623864    5580 cache.go:87] Successfully saved all images to host disk.
	I1003 17:40:56.623888    5580 start.go:365] acquiring machines lock for no-preload-387000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:40:58.495010    5580 start.go:369] acquired machines lock for "no-preload-387000" in 1.871115667s
	I1003 17:40:58.495128    5580 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:40:58.495166    5580 fix.go:54] fixHost starting: 
	I1003 17:40:58.495798    5580 fix.go:102] recreateIfNeeded on no-preload-387000: state=Stopped err=<nil>
	W1003 17:40:58.495857    5580 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:40:58.504345    5580 out.go:177] * Restarting existing qemu2 VM for "no-preload-387000" ...
	I1003 17:40:58.515685    5580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:c9:8d:2a:f1:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2
	I1003 17:40:58.526048    5580 main.go:141] libmachine: STDOUT: 
	I1003 17:40:58.526120    5580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:40:58.526294    5580 fix.go:56] fixHost completed within 31.123333ms
	I1003 17:40:58.526319    5580 start.go:83] releasing machines lock for "no-preload-387000", held for 31.281041ms
	W1003 17:40:58.526363    5580 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:40:58.526540    5580 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:40:58.526565    5580 start.go:703] Will try again in 5 seconds ...
	I1003 17:41:03.528877    5580 start.go:365] acquiring machines lock for no-preload-387000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:41:03.529368    5580 start.go:369] acquired machines lock for "no-preload-387000" in 390.5µs
	I1003 17:41:03.529532    5580 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:41:03.529553    5580 fix.go:54] fixHost starting: 
	I1003 17:41:03.530281    5580 fix.go:102] recreateIfNeeded on no-preload-387000: state=Stopped err=<nil>
	W1003 17:41:03.530309    5580 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:41:03.535990    5580 out.go:177] * Restarting existing qemu2 VM for "no-preload-387000" ...
	I1003 17:41:03.544249    5580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:c9:8d:2a:f1:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/no-preload-387000/disk.qcow2
	I1003 17:41:03.554378    5580 main.go:141] libmachine: STDOUT: 
	I1003 17:41:03.554421    5580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:41:03.554504    5580 fix.go:56] fixHost completed within 24.952917ms
	I1003 17:41:03.554525    5580 start.go:83] releasing machines lock for "no-preload-387000", held for 25.135375ms
	W1003 17:41:03.554770    5580 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-387000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-387000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:03.566907    5580 out.go:177] 
	W1003 17:41:03.570204    5580 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:41:03.570241    5580 out.go:239] * 
	* 
	W1003 17:41:03.572249    5580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:41:03.582979    5580 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-387000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000: exit status 7 (47.693291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-387000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.12s)

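The trace above shows the shape of minikube's recovery path: one retry after a fixed 5-second delay ("Will try again in 5 seconds ..."), then a hard GUEST_PROVISION exit. A simplified model of that control flow, with startHost standing in for the qemu2 driver start (an illustration, not minikube's actual implementation):

	// retrysketch.go: a simplified model of the start/retry flow in the
	// trace above; illustrative only.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		// Stands in for the qemu2 driver start; in this run it always fails.
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}
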
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-387000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000: exit status 7 (33.891625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-387000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-387000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-387000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-387000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.431125ms)

** stderr ** 
	error: context "no-preload-387000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-387000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000: exit status 7 (33.044041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-387000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-387000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-387000 "sudo crictl images -o json": exit status 89 (40.198125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-387000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-387000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-387000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000: exit status 7 (29.104208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-387000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

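The "failed to decode images json" message is a downstream symptom: "minikube ssh" exits with status 89 before crictl ever runs, so the captured output is minikube's advisory banner, which begins with '*' and is not JSON. A self-contained Go demonstration of why the decoder reports exactly that error:

	// decodefail.go: shows the stdlib error produced when the banner,
	// rather than crictl JSON, is fed to the decoder. Standalone sketch.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		out := []byte("* The control plane node must be running for this command")
		var images struct {
			Images []struct {
				RepoTags []string `json:"repoTags"`
			} `json:"images"`
		}
		err := json.Unmarshal(out, &images)
		fmt.Println(err) // invalid character '*' looking for beginning of value
	}
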
TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-387000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-387000 --alsologtostderr -v=1: exit status 89 (38.4375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-387000"

-- /stdout --
** stderr ** 
	I1003 17:41:03.836198    5600 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:41:03.836380    5600 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:03.836383    5600 out.go:309] Setting ErrFile to fd 2...
	I1003 17:41:03.836386    5600 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:03.836535    5600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:41:03.836750    5600 out.go:303] Setting JSON to false
	I1003 17:41:03.836759    5600 mustload.go:65] Loading cluster: no-preload-387000
	I1003 17:41:03.836966    5600 config.go:182] Loaded profile config "no-preload-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:41:03.839966    5600 out.go:177] * The control plane node must be running for this command
	I1003 17:41:03.842958    5600 out.go:177]   To start a cluster, run: "minikube start -p no-preload-387000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-387000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000: exit status 7 (28.649084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-387000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000: exit status 7 (28.683167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-387000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-776000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-776000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (11.205979458s)

-- stdout --
	* [default-k8s-diff-port-776000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-776000 in cluster default-k8s-diff-port-776000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-776000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:41:04.539712    5638 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:41:04.539872    5638 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:04.539875    5638 out.go:309] Setting ErrFile to fd 2...
	I1003 17:41:04.539878    5638 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:04.540011    5638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:41:04.541043    5638 out.go:303] Setting JSON to false
	I1003 17:41:04.557044    5638 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2438,"bootTime":1696377626,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:41:04.557144    5638 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:41:04.560919    5638 out.go:177] * [default-k8s-diff-port-776000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:41:04.567986    5638 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:41:04.570915    5638 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:41:04.568048    5638 notify.go:220] Checking for updates...
	I1003 17:41:04.576920    5638 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:41:04.583900    5638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:41:04.587942    5638 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:41:04.595913    5638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:41:04.600229    5638 config.go:182] Loaded profile config "embed-certs-391000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:41:04.600302    5638 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:41:04.600359    5638 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:41:04.603920    5638 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:41:04.610769    5638 start.go:298] selected driver: qemu2
	I1003 17:41:04.610777    5638 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:41:04.610785    5638 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:41:04.613245    5638 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:41:04.617866    5638 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:41:04.621017    5638 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:41:04.621045    5638 cni.go:84] Creating CNI manager for ""
	I1003 17:41:04.621063    5638 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:41:04.621067    5638 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:41:04.621072    5638 start_flags.go:321] config:
	{Name:default-k8s-diff-port-776000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-776000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:41:04.625901    5638 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:41:04.636947    5638 out.go:177] * Starting control plane node default-k8s-diff-port-776000 in cluster default-k8s-diff-port-776000
	I1003 17:41:04.640965    5638 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:41:04.640990    5638 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:41:04.641004    5638 cache.go:57] Caching tarball of preloaded images
	I1003 17:41:04.641070    5638 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:41:04.641075    5638 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:41:04.641155    5638 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/default-k8s-diff-port-776000/config.json ...
	I1003 17:41:04.641166    5638 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/default-k8s-diff-port-776000/config.json: {Name:mk2e744e24e6cfb013501c179108a010445ea6c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:41:04.641398    5638 start.go:365] acquiring machines lock for default-k8s-diff-port-776000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:41:05.942146    5638 start.go:369] acquired machines lock for "default-k8s-diff-port-776000" in 1.300746s
	I1003 17:41:05.942379    5638 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-776000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-776000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:41:05.942613    5638 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:41:05.949925    5638 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:41:05.999453    5638 start.go:159] libmachine.API.Create for "default-k8s-diff-port-776000" (driver="qemu2")
	I1003 17:41:05.999508    5638 client.go:168] LocalClient.Create starting
	I1003 17:41:05.999613    5638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:41:05.999676    5638 main.go:141] libmachine: Decoding PEM data...
	I1003 17:41:05.999695    5638 main.go:141] libmachine: Parsing certificate...
	I1003 17:41:05.999761    5638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:41:05.999794    5638 main.go:141] libmachine: Decoding PEM data...
	I1003 17:41:05.999809    5638 main.go:141] libmachine: Parsing certificate...
	I1003 17:41:06.000431    5638 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:41:06.130651    5638 main.go:141] libmachine: Creating SSH key...
	I1003 17:41:06.267062    5638 main.go:141] libmachine: Creating Disk image...
	I1003 17:41:06.267072    5638 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:41:06.267238    5638 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2
	I1003 17:41:06.291797    5638 main.go:141] libmachine: STDOUT: 
	I1003 17:41:06.291816    5638 main.go:141] libmachine: STDERR: 
	I1003 17:41:06.291897    5638 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2 +20000M
	I1003 17:41:06.300204    5638 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:41:06.300235    5638 main.go:141] libmachine: STDERR: 
	I1003 17:41:06.300257    5638 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2
	I1003 17:41:06.300267    5638 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:41:06.300299    5638 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:59:7b:76:9d:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2
	I1003 17:41:06.302129    5638 main.go:141] libmachine: STDOUT: 
	I1003 17:41:06.302144    5638 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:41:06.302162    5638 client.go:171] LocalClient.Create took 302.652292ms
	I1003 17:41:08.304326    5638 start.go:128] duration metric: createHost completed in 2.36172675s
	I1003 17:41:08.304403    5638 start.go:83] releasing machines lock for "default-k8s-diff-port-776000", held for 2.3622705s
	W1003 17:41:08.304495    5638 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:08.320056    5638 out.go:177] * Deleting "default-k8s-diff-port-776000" in qemu2 ...
	W1003 17:41:08.342873    5638 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:08.342904    5638 start.go:703] Will try again in 5 seconds ...
	I1003 17:41:13.343043    5638 start.go:365] acquiring machines lock for default-k8s-diff-port-776000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:41:13.361775    5638 start.go:369] acquired machines lock for "default-k8s-diff-port-776000" in 18.633625ms
	I1003 17:41:13.361844    5638 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-776000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-776000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:41:13.362126    5638 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:41:13.373856    5638 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:41:13.421491    5638 start.go:159] libmachine.API.Create for "default-k8s-diff-port-776000" (driver="qemu2")
	I1003 17:41:13.421530    5638 client.go:168] LocalClient.Create starting
	I1003 17:41:13.421658    5638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:41:13.421719    5638 main.go:141] libmachine: Decoding PEM data...
	I1003 17:41:13.421740    5638 main.go:141] libmachine: Parsing certificate...
	I1003 17:41:13.421800    5638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:41:13.421834    5638 main.go:141] libmachine: Decoding PEM data...
	I1003 17:41:13.421852    5638 main.go:141] libmachine: Parsing certificate...
	I1003 17:41:13.422354    5638 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:41:13.553147    5638 main.go:141] libmachine: Creating SSH key...
	I1003 17:41:13.651420    5638 main.go:141] libmachine: Creating Disk image...
	I1003 17:41:13.651427    5638 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:41:13.651583    5638 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2
	I1003 17:41:13.664205    5638 main.go:141] libmachine: STDOUT: 
	I1003 17:41:13.664220    5638 main.go:141] libmachine: STDERR: 
	I1003 17:41:13.664289    5638 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2 +20000M
	I1003 17:41:13.672627    5638 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:41:13.672648    5638 main.go:141] libmachine: STDERR: 
	I1003 17:41:13.672670    5638 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2
	I1003 17:41:13.672682    5638 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:41:13.672720    5638 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:1a:16:87:3b:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2
	I1003 17:41:13.674859    5638 main.go:141] libmachine: STDOUT: 
	I1003 17:41:13.674881    5638 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:41:13.674899    5638 client.go:171] LocalClient.Create took 253.369917ms
	I1003 17:41:15.677110    5638 start.go:128] duration metric: createHost completed in 2.314963333s
	I1003 17:41:15.677195    5638 start.go:83] releasing machines lock for "default-k8s-diff-port-776000", held for 2.315438542s
	W1003 17:41:15.677517    5638 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-776000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-776000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:15.691195    5638 out.go:177] 
	W1003 17:41:15.695221    5638 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:41:15.695260    5638 out.go:239] * 
	* 
	W1003 17:41:15.698153    5638 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:41:15.705152    5638 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-776000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000: exit status 7 (49.926375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.26s)
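Note: every step above except the network attach succeeds. The qemu-img convert/resize pair creates the qcow2 disk, but qemu-system-aarch64 is launched through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch before re-running the suite, assuming socket_vmnet was installed under the /opt/socket_vmnet prefix shown in the log (the launchd job name is an assumption, not taken from this log):

	# Does the daemon socket exist, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If socket_vmnet runs under launchd, check whether the job is loaded (label is a guess):
	sudo launchctl list | grep -i socket_vmnet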

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-391000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-391000 create -f testdata/busybox.yaml: exit status 1 (30.998042ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-391000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (32.686375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (32.812166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
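Note: kubectl's "error: no openapi getter" is a downstream symptom, not a manifest problem: the embed-certs-391000 cluster never started, so there is no API server to serve the OpenAPI schema that "create -f" validates against. A quick sanity check before debugging busybox.yaml (sketch; the context name is taken from the log):

	kubectl config get-contexts
	kubectl --context embed-certs-391000 cluster-info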

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-391000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-391000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-391000 describe deploy/metrics-server -n kube-system: exit status 1 (28.070834ms)

** stderr ** 
	error: context "embed-certs-391000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-391000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (29.074583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
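Note: the assertion expects the addon's registry override to take effect, i.e. the metrics-server deployment image should read "fake.domain/registry.k8s.io/echoserver:1.4" (the --registries value prefixed onto the --images value). Against a healthy cluster the check reduces to roughly the following (the jsonpath assumes a single-container deployment; that layout is an assumption of this sketch):

	kubectl --context embed-certs-391000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'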

TestStartStop/group/embed-certs/serial/SecondStart (7.06s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-391000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-391000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (7.000221375s)

-- stdout --
	* [embed-certs-391000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-391000 in cluster embed-certs-391000
	* Restarting existing qemu2 VM for "embed-certs-391000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-391000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:41:06.423499    5667 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:41:06.423637    5667 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:06.423640    5667 out.go:309] Setting ErrFile to fd 2...
	I1003 17:41:06.423643    5667 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:06.423775    5667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:41:06.424751    5667 out.go:303] Setting JSON to false
	I1003 17:41:06.440628    5667 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2440,"bootTime":1696377626,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:41:06.440709    5667 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:41:06.445872    5667 out.go:177] * [embed-certs-391000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:41:06.452956    5667 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:41:06.453019    5667 notify.go:220] Checking for updates...
	I1003 17:41:06.456863    5667 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:41:06.459843    5667 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:41:06.462898    5667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:41:06.465864    5667 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:41:06.468865    5667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:41:06.472125    5667 config.go:182] Loaded profile config "embed-certs-391000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:41:06.472405    5667 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:41:06.476864    5667 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 17:41:06.483885    5667 start.go:298] selected driver: qemu2
	I1003 17:41:06.483893    5667 start.go:902] validating driver "qemu2" against &{Name:embed-certs-391000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-391000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:41:06.483960    5667 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:41:06.486253    5667 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:41:06.486278    5667 cni.go:84] Creating CNI manager for ""
	I1003 17:41:06.486285    5667 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:41:06.486290    5667 start_flags.go:321] config:
	{Name:embed-certs-391000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-391000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:41:06.490446    5667 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:41:06.497869    5667 out.go:177] * Starting control plane node embed-certs-391000 in cluster embed-certs-391000
	I1003 17:41:06.501917    5667 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:41:06.501935    5667 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:41:06.501949    5667 cache.go:57] Caching tarball of preloaded images
	I1003 17:41:06.502002    5667 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:41:06.502009    5667 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:41:06.502068    5667 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/embed-certs-391000/config.json ...
	I1003 17:41:06.502474    5667 start.go:365] acquiring machines lock for embed-certs-391000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:41:08.304583    5667 start.go:369] acquired machines lock for "embed-certs-391000" in 1.802117584s
	I1003 17:41:08.304706    5667 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:41:08.304750    5667 fix.go:54] fixHost starting: 
	I1003 17:41:08.305410    5667 fix.go:102] recreateIfNeeded on embed-certs-391000: state=Stopped err=<nil>
	W1003 17:41:08.305471    5667 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:41:08.312600    5667 out.go:177] * Restarting existing qemu2 VM for "embed-certs-391000" ...
	I1003 17:41:08.324097    5667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:3e:53:19:41:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2
	I1003 17:41:08.334279    5667 main.go:141] libmachine: STDOUT: 
	I1003 17:41:08.334347    5667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:41:08.334458    5667 fix.go:56] fixHost completed within 29.704958ms
	I1003 17:41:08.334479    5667 start.go:83] releasing machines lock for "embed-certs-391000", held for 29.872333ms
	W1003 17:41:08.334503    5667 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:41:08.334651    5667 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:08.334670    5667 start.go:703] Will try again in 5 seconds ...
	I1003 17:41:13.336868    5667 start.go:365] acquiring machines lock for embed-certs-391000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:41:13.337311    5667 start.go:369] acquired machines lock for "embed-certs-391000" in 324µs
	I1003 17:41:13.337453    5667 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:41:13.337475    5667 fix.go:54] fixHost starting: 
	I1003 17:41:13.338167    5667 fix.go:102] recreateIfNeeded on embed-certs-391000: state=Stopped err=<nil>
	W1003 17:41:13.338192    5667 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:41:13.346953    5667 out.go:177] * Restarting existing qemu2 VM for "embed-certs-391000" ...
	I1003 17:41:13.351924    5667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:3e:53:19:41:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/embed-certs-391000/disk.qcow2
	I1003 17:41:13.361507    5667 main.go:141] libmachine: STDOUT: 
	I1003 17:41:13.361572    5667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:41:13.361650    5667 fix.go:56] fixHost completed within 24.177042ms
	I1003 17:41:13.361676    5667 start.go:83] releasing machines lock for "embed-certs-391000", held for 24.337958ms
	W1003 17:41:13.361910    5667 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-391000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-391000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:13.373859    5667 out.go:177] 
	W1003 17:41:13.378025    5667 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:41:13.378084    5667 out.go:239] * 
	* 
	W1003 17:41:13.380416    5667 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:41:13.385904    5667 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-391000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (54.428833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.06s)
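Note: unlike FirstStart, SecondStart reuses the stopped VM, so the failure chain reads "driver start" rather than "creating host", but it dead-ends on the same socket_vmnet connection refusal. Once the daemon is reachable again, this profile alone can be retried to isolate the restart path (sketch; a trimmed version of the test invocation above):

	out/minikube-darwin-arm64 start -p embed-certs-391000 --driver=qemu2 --alsologtostderr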

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-391000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (33.045708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-391000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-391000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-391000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.008208ms)

** stderr ** 
	error: context "embed-certs-391000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-391000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (32.762792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-391000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-391000 "sudo crictl images -o json": exit status 89 (40.084875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-391000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-391000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-391000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (28.698917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
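Note: the "failed to decode images json" message is expected noise here: with the node stopped, the ssh subcommand returns the advisory text instead of JSON, so every v1.28.2 image shows up as missing in the want/got diff. On a running node, the list the test parses could be extracted along these lines (the jq filter assumes crictl's JSON carries an images array with repoTags; that schema detail is an assumption of this sketch):

	out/minikube-darwin-arm64 ssh -p embed-certs-391000 "sudo crictl images -o json" \
	  | jq -r '.images[].repoTags[]'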

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-391000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-391000 --alsologtostderr -v=1: exit status 89 (45.039625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-391000"

-- /stdout --
** stderr ** 
	I1003 17:41:13.645925    5690 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:41:13.646129    5690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:13.646132    5690 out.go:309] Setting ErrFile to fd 2...
	I1003 17:41:13.646135    5690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:13.646272    5690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:41:13.646481    5690 out.go:303] Setting JSON to false
	I1003 17:41:13.646496    5690 mustload.go:65] Loading cluster: embed-certs-391000
	I1003 17:41:13.646697    5690 config.go:182] Loaded profile config "embed-certs-391000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:41:13.650962    5690 out.go:177] * The control plane node must be running for this command
	I1003 17:41:13.658859    5690 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-391000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-391000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (27.994667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (27.923166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (11.39s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-062000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-062000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (11.324308083s)

-- stdout --
	* [newest-cni-062000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-062000 in cluster newest-cni-062000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-062000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:41:14.111240    5716 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:41:14.111402    5716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:14.111406    5716 out.go:309] Setting ErrFile to fd 2...
	I1003 17:41:14.111409    5716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:14.111554    5716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:41:14.112607    5716 out.go:303] Setting JSON to false
	I1003 17:41:14.128852    5716 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2448,"bootTime":1696377626,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:41:14.128926    5716 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:41:14.132417    5716 out.go:177] * [newest-cni-062000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:41:14.138303    5716 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:41:14.142309    5716 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:41:14.138417    5716 notify.go:220] Checking for updates...
	I1003 17:41:14.148279    5716 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:41:14.151297    5716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:41:14.152773    5716 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:41:14.156258    5716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:41:14.159635    5716 config.go:182] Loaded profile config "default-k8s-diff-port-776000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:41:14.159698    5716 config.go:182] Loaded profile config "multinode-609000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:41:14.159744    5716 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:41:14.164128    5716 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 17:41:14.171245    5716 start.go:298] selected driver: qemu2
	I1003 17:41:14.171252    5716 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:41:14.171258    5716 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:41:14.173696    5716 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W1003 17:41:14.173718    5716 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1003 17:41:14.181313    5716 out.go:177] * Automatically selected the socket_vmnet network
	I1003 17:41:14.185368    5716 start_flags.go:942] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1003 17:41:14.185407    5716 cni.go:84] Creating CNI manager for ""
	I1003 17:41:14.185415    5716 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:41:14.185420    5716 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:41:14.185423    5716 start_flags.go:321] config:
	{Name:newest-cni-062000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-062000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:41:14.190013    5716 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:41:14.198271    5716 out.go:177] * Starting control plane node newest-cni-062000 in cluster newest-cni-062000
	I1003 17:41:14.202297    5716 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:41:14.202313    5716 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:41:14.202329    5716 cache.go:57] Caching tarball of preloaded images
	I1003 17:41:14.202385    5716 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:41:14.202391    5716 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:41:14.202467    5716 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/newest-cni-062000/config.json ...
	I1003 17:41:14.202478    5716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/newest-cni-062000/config.json: {Name:mk4a010536e8c5b73efe3e9f13e4e99b31c03602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:41:14.202690    5716 start.go:365] acquiring machines lock for newest-cni-062000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:41:15.677370    5716 start.go:369] acquired machines lock for "newest-cni-062000" in 1.474680375s
	I1003 17:41:15.677549    5716 start.go:93] Provisioning new machine with config: &{Name:newest-cni-062000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-062000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:41:15.677790    5716 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:41:15.687148    5716 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:41:15.736264    5716 start.go:159] libmachine.API.Create for "newest-cni-062000" (driver="qemu2")
	I1003 17:41:15.736315    5716 client.go:168] LocalClient.Create starting
	I1003 17:41:15.736457    5716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:41:15.736510    5716 main.go:141] libmachine: Decoding PEM data...
	I1003 17:41:15.736532    5716 main.go:141] libmachine: Parsing certificate...
	I1003 17:41:15.736591    5716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:41:15.736625    5716 main.go:141] libmachine: Decoding PEM data...
	I1003 17:41:15.736640    5716 main.go:141] libmachine: Parsing certificate...
	I1003 17:41:15.737255    5716 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:41:15.866035    5716 main.go:141] libmachine: Creating SSH key...
	I1003 17:41:15.959938    5716 main.go:141] libmachine: Creating Disk image...
	I1003 17:41:15.959951    5716 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:41:15.960133    5716 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2
	I1003 17:41:15.969696    5716 main.go:141] libmachine: STDOUT: 
	I1003 17:41:15.969720    5716 main.go:141] libmachine: STDERR: 
	I1003 17:41:15.969787    5716 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2 +20000M
	I1003 17:41:15.978190    5716 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:41:15.978208    5716 main.go:141] libmachine: STDERR: 
	I1003 17:41:15.978227    5716 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2
	I1003 17:41:15.978240    5716 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:41:15.978279    5716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:48:65:fb:03:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2
	I1003 17:41:15.980112    5716 main.go:141] libmachine: STDOUT: 
	I1003 17:41:15.980128    5716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:41:15.980152    5716 client.go:171] LocalClient.Create took 243.834375ms
	I1003 17:41:17.982349    5716 start.go:128] duration metric: createHost completed in 2.304560833s
	I1003 17:41:17.982442    5716 start.go:83] releasing machines lock for "newest-cni-062000", held for 2.305078459s
	W1003 17:41:17.982495    5716 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:17.996083    5716 out.go:177] * Deleting "newest-cni-062000" in qemu2 ...
	W1003 17:41:18.018911    5716 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:18.018940    5716 start.go:703] Will try again in 5 seconds ...
	I1003 17:41:23.021048    5716 start.go:365] acquiring machines lock for newest-cni-062000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:41:23.035579    5716 start.go:369] acquired machines lock for "newest-cni-062000" in 14.456417ms
	I1003 17:41:23.035633    5716 start.go:93] Provisioning new machine with config: &{Name:newest-cni-062000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-062000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 17:41:23.035852    5716 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 17:41:23.046940    5716 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 17:41:23.090400    5716 start.go:159] libmachine.API.Create for "newest-cni-062000" (driver="qemu2")
	I1003 17:41:23.090435    5716 client.go:168] LocalClient.Create starting
	I1003 17:41:23.090535    5716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/ca.pem
	I1003 17:41:23.090597    5716 main.go:141] libmachine: Decoding PEM data...
	I1003 17:41:23.090623    5716 main.go:141] libmachine: Parsing certificate...
	I1003 17:41:23.090683    5716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17345-986/.minikube/certs/cert.pem
	I1003 17:41:23.090718    5716 main.go:141] libmachine: Decoding PEM data...
	I1003 17:41:23.090730    5716 main.go:141] libmachine: Parsing certificate...
	I1003 17:41:23.091243    5716 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17345-986/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso...
	I1003 17:41:23.223407    5716 main.go:141] libmachine: Creating SSH key...
	I1003 17:41:23.347185    5716 main.go:141] libmachine: Creating Disk image...
	I1003 17:41:23.347198    5716 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 17:41:23.347366    5716 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2.raw /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2
	I1003 17:41:23.356722    5716 main.go:141] libmachine: STDOUT: 
	I1003 17:41:23.356747    5716 main.go:141] libmachine: STDERR: 
	I1003 17:41:23.356801    5716 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2 +20000M
	I1003 17:41:23.365385    5716 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 17:41:23.365404    5716 main.go:141] libmachine: STDERR: 
	I1003 17:41:23.365419    5716 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2
	I1003 17:41:23.365427    5716 main.go:141] libmachine: Starting QEMU VM...
	I1003 17:41:23.365467    5716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:21:73:34:08:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2
	I1003 17:41:23.367327    5716 main.go:141] libmachine: STDOUT: 
	I1003 17:41:23.367342    5716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:41:23.367354    5716 client.go:171] LocalClient.Create took 276.9205ms
	I1003 17:41:25.369570    5716 start.go:128] duration metric: createHost completed in 2.333719958s
	I1003 17:41:25.369660    5716 start.go:83] releasing machines lock for "newest-cni-062000", held for 2.334100625s
	W1003 17:41:25.370189    5716 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-062000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-062000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:25.379761    5716 out.go:177] 
	W1003 17:41:25.383906    5716 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:41:25.383939    5716 out.go:239] * 
	* 
	W1003 17:41:25.386547    5716 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:41:25.395681    5716 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-062000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-062000 -n newest-cni-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-062000 -n newest-cni-062000: exit status 7 (66.71325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.39s)
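Every qemu2 start in this run dies at the same step: socket_vmnet_client cannot reach the host-side socket_vmnet daemon at /var/run/socket_vmnet, so no VM ever boots and the dependent tests fail by cascade. A minimal triage sketch for the CI host, assuming socket_vmnet was installed through Homebrew as in the minikube qemu2 driver docs (the launchd lookup and restart step are assumptions, not taken from this report):

	# Does the unix socket exist, and is the daemon registered with launchd?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# Restart the Homebrew-managed daemon, then re-run the failing start
	sudo brew services restart socket_vmnet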

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-776000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-776000 create -f testdata/busybox.yaml: exit status 1 (30.953833ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-776000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000: exit status 7 (33.1705ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-776000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000: exit status 7 (32.702833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
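kubectl's "error: no openapi getter" above is a symptom rather than the root cause: create wants the server's OpenAPI schema for validation, and there is no reachable API server because the cluster behind the default-k8s-diff-port-776000 context never started. A hedged way to confirm that locally (standard kubectl flags; the exact error text returned is an assumption):

	# Skip schema validation so the underlying connectivity failure surfaces
	kubectl --context default-k8s-diff-port-776000 create -f testdata/busybox.yaml --validate=false
	kubectl --context default-k8s-diff-port-776000 cluster-info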

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-776000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-776000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-776000 describe deploy/metrics-server -n kube-system: exit status 1 (27.801625ms)

** stderr ** 
	error: context "default-k8s-diff-port-776000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-776000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000: exit status 7 (29.005791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-776000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-776000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (6.939598333s)

-- stdout --
	* [default-k8s-diff-port-776000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-776000 in cluster default-k8s-diff-port-776000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-776000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-776000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 17:41:16.155952    5748 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:41:16.156078    5748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:16.156081    5748 out.go:309] Setting ErrFile to fd 2...
	I1003 17:41:16.156084    5748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:16.156212    5748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:41:16.157151    5748 out.go:303] Setting JSON to false
	I1003 17:41:16.173215    5748 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2450,"bootTime":1696377626,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:41:16.173284    5748 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:41:16.177153    5748 out.go:177] * [default-k8s-diff-port-776000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:41:16.184113    5748 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:41:16.188093    5748 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:41:16.184161    5748 notify.go:220] Checking for updates...
	I1003 17:41:16.194083    5748 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:41:16.197083    5748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:41:16.200132    5748 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:41:16.203062    5748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:41:16.206418    5748 config.go:182] Loaded profile config "default-k8s-diff-port-776000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:41:16.206684    5748 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:41:16.211106    5748 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 17:41:16.218040    5748 start.go:298] selected driver: qemu2
	I1003 17:41:16.218047    5748 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-776000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-776000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:41:16.218103    5748 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:41:16.220574    5748 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:41:16.220598    5748 cni.go:84] Creating CNI manager for ""
	I1003 17:41:16.220606    5748 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:41:16.220611    5748 start_flags.go:321] config:
	{Name:default-k8s-diff-port-776000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-776000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:41:16.224926    5748 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:41:16.232079    5748 out.go:177] * Starting control plane node default-k8s-diff-port-776000 in cluster default-k8s-diff-port-776000
	I1003 17:41:16.236098    5748 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:41:16.236119    5748 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:41:16.236128    5748 cache.go:57] Caching tarball of preloaded images
	I1003 17:41:16.236185    5748 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:41:16.236190    5748 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:41:16.236261    5748 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/default-k8s-diff-port-776000/config.json ...
	I1003 17:41:16.236603    5748 start.go:365] acquiring machines lock for default-k8s-diff-port-776000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:41:17.982603    5748 start.go:369] acquired machines lock for "default-k8s-diff-port-776000" in 1.74596625s
	I1003 17:41:17.982721    5748 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:41:17.982764    5748 fix.go:54] fixHost starting: 
	I1003 17:41:17.983460    5748 fix.go:102] recreateIfNeeded on default-k8s-diff-port-776000: state=Stopped err=<nil>
	W1003 17:41:17.983505    5748 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:41:17.992133    5748 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-776000" ...
	I1003 17:41:18.000152    5748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:1a:16:87:3b:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2
	I1003 17:41:18.010162    5748 main.go:141] libmachine: STDOUT: 
	I1003 17:41:18.010218    5748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:41:18.010326    5748 fix.go:56] fixHost completed within 27.568125ms
	I1003 17:41:18.010347    5748 start.go:83] releasing machines lock for "default-k8s-diff-port-776000", held for 27.684083ms
	W1003 17:41:18.010375    5748 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:41:18.010555    5748 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:18.010572    5748 start.go:703] Will try again in 5 seconds ...
	I1003 17:41:23.012828    5748 start.go:365] acquiring machines lock for default-k8s-diff-port-776000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:41:23.013345    5748 start.go:369] acquired machines lock for "default-k8s-diff-port-776000" in 404.333µs
	I1003 17:41:23.013495    5748 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:41:23.013516    5748 fix.go:54] fixHost starting: 
	I1003 17:41:23.014232    5748 fix.go:102] recreateIfNeeded on default-k8s-diff-port-776000: state=Stopped err=<nil>
	W1003 17:41:23.014259    5748 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:41:23.021018    5748 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-776000" ...
	I1003 17:41:23.025314    5748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:1a:16:87:3b:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/default-k8s-diff-port-776000/disk.qcow2
	I1003 17:41:23.035291    5748 main.go:141] libmachine: STDOUT: 
	I1003 17:41:23.035339    5748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:41:23.035457    5748 fix.go:56] fixHost completed within 21.942583ms
	I1003 17:41:23.035512    5748 start.go:83] releasing machines lock for "default-k8s-diff-port-776000", held for 22.144542ms
	W1003 17:41:23.035657    5748 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-776000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-776000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:23.046912    5748 out.go:177] 
	W1003 17:41:23.050998    5748 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:41:23.051039    5748 out.go:239] * 
	* 
	W1003 17:41:23.053006    5748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:41:23.061906    5748 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-776000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000: exit status 7 (50.527125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.99s)
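Note the error chain differs from the first start only in its prefix: "driver start" (restarting an existing VM) rather than "creating host: create: creating" (provisioning a new one); both bottom out in the same socket_vmnet connect. socket_vmnet_client connects to the unix socket and execs the wrapped command with the connection passed as a file descriptor (fd=3 in the qemu command lines above), so wrapping a no-op command makes a cheap connectivity probe. A sketch under that assumption (the /usr/bin/true probe is illustrative, not from the report):

	# Exits non-zero with "Connection refused" if the daemon is down
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true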

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-776000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000: exit status 7 (34.271583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-776000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-776000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-776000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.714625ms)

** stderr ** 
	error: context "default-k8s-diff-port-776000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-776000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000: exit status 7 (32.530834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-776000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-776000 "sudo crictl images -o json": exit status 89 (42.550333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-776000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-776000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-776000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000: exit status 7 (29.00675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
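The image check is only meaningful against a running node: exit status 89 is minikube refusing to ssh because the control plane is down, and the test then fails a second time trying to parse that advice banner as JSON. On a healthy cluster the same probe can be replayed by hand; a sketch, assuming jq is available on the workstation (the jq filter is illustrative, not part of the test):

	# List the repo tags the test compares against its expected image set
	out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-776000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'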

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-776000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-776000 --alsologtostderr -v=1: exit status 89 (39.493708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-776000"

-- /stdout --
** stderr ** 
	I1003 17:41:23.318252    5769 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:41:23.318428    5769 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:23.318431    5769 out.go:309] Setting ErrFile to fd 2...
	I1003 17:41:23.318434    5769 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:23.318574    5769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:41:23.318782    5769 out.go:303] Setting JSON to false
	I1003 17:41:23.318791    5769 mustload.go:65] Loading cluster: default-k8s-diff-port-776000
	I1003 17:41:23.318980    5769 config.go:182] Loaded profile config "default-k8s-diff-port-776000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:41:23.322943    5769 out.go:177] * The control plane node must be running for this command
	I1003 17:41:23.325961    5769 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-776000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-776000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000: exit status 7 (29.037167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-776000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000: exit status 7 (27.784708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-062000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-062000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (5.177758833s)

                                                
                                                
-- stdout --
	* [newest-cni-062000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-062000 in cluster newest-cni-062000
	* Restarting existing qemu2 VM for "newest-cni-062000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-062000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 17:41:25.722510    5805 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:41:25.722670    5805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:25.722673    5805 out.go:309] Setting ErrFile to fd 2...
	I1003 17:41:25.722676    5805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:25.722792    5805 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:41:25.723761    5805 out.go:303] Setting JSON to false
	I1003 17:41:25.739803    5805 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2459,"bootTime":1696377626,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:41:25.739883    5805 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:41:25.744465    5805 out.go:177] * [newest-cni-062000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:41:25.751413    5805 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:41:25.755424    5805 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:41:25.751483    5805 notify.go:220] Checking for updates...
	I1003 17:41:25.758424    5805 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:41:25.761384    5805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:41:25.764412    5805 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:41:25.767383    5805 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:41:25.770708    5805 config.go:182] Loaded profile config "newest-cni-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:41:25.770948    5805 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:41:25.775408    5805 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 17:41:25.782388    5805 start.go:298] selected driver: qemu2
	I1003 17:41:25.782397    5805 start.go:902] validating driver "qemu2" against &{Name:newest-cni-062000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-062000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:41:25.782467    5805 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:41:25.784802    5805 start_flags.go:942] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1003 17:41:25.784826    5805 cni.go:84] Creating CNI manager for ""
	I1003 17:41:25.784834    5805 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:41:25.784839    5805 start_flags.go:321] config:
	{Name:newest-cni-062000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-062000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:41:25.789040    5805 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:41:25.794398    5805 out.go:177] * Starting control plane node newest-cni-062000 in cluster newest-cni-062000
	I1003 17:41:25.798386    5805 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:41:25.798401    5805 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:41:25.798406    5805 cache.go:57] Caching tarball of preloaded images
	I1003 17:41:25.798457    5805 preload.go:174] Found /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 17:41:25.798463    5805 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1003 17:41:25.798527    5805 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/newest-cni-062000/config.json ...
	I1003 17:41:25.798941    5805 start.go:365] acquiring machines lock for newest-cni-062000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:41:25.798976    5805 start.go:369] acquired machines lock for "newest-cni-062000" in 28µs
	I1003 17:41:25.798984    5805 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:41:25.798990    5805 fix.go:54] fixHost starting: 
	I1003 17:41:25.799113    5805 fix.go:102] recreateIfNeeded on newest-cni-062000: state=Stopped err=<nil>
	W1003 17:41:25.799124    5805 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:41:25.802383    5805 out.go:177] * Restarting existing qemu2 VM for "newest-cni-062000" ...
	I1003 17:41:25.809366    5805 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:21:73:34:08:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2
	I1003 17:41:25.811305    5805 main.go:141] libmachine: STDOUT: 
	I1003 17:41:25.811322    5805 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:41:25.811366    5805 fix.go:56] fixHost completed within 12.370125ms
	I1003 17:41:25.811371    5805 start.go:83] releasing machines lock for "newest-cni-062000", held for 12.391ms
	W1003 17:41:25.811377    5805 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:41:25.811402    5805 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:25.811407    5805 start.go:703] Will try again in 5 seconds ...
	I1003 17:41:30.813200    5805 start.go:365] acquiring machines lock for newest-cni-062000: {Name:mk16657683bb2d197cbddad244cc8b10f1cb12f0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:41:30.813642    5805 start.go:369] acquired machines lock for "newest-cni-062000" in 273.709µs
	I1003 17:41:30.813795    5805 start.go:96] Skipping create...Using existing machine configuration
	I1003 17:41:30.813815    5805 fix.go:54] fixHost starting: 
	I1003 17:41:30.814482    5805 fix.go:102] recreateIfNeeded on newest-cni-062000: state=Stopped err=<nil>
	W1003 17:41:30.814507    5805 fix.go:128] unexpected machine state, will restart: <nil>
	I1003 17:41:30.823911    5805 out.go:177] * Restarting existing qemu2 VM for "newest-cni-062000" ...
	I1003 17:41:30.829129    5805 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:21:73:34:08:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17345-986/.minikube/machines/newest-cni-062000/disk.qcow2
	I1003 17:41:30.837619    5805 main.go:141] libmachine: STDOUT: 
	I1003 17:41:30.837674    5805 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 17:41:30.837734    5805 fix.go:56] fixHost completed within 23.91975ms
	I1003 17:41:30.837757    5805 start.go:83] releasing machines lock for "newest-cni-062000", held for 24.094209ms
	W1003 17:41:30.837959    5805 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-062000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-062000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 17:41:30.846877    5805 out.go:177] 
	W1003 17:41:30.849951    5805 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 17:41:30.849976    5805 out.go:239] * 
	* 
	W1003 17:41:30.852447    5805 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:41:30.860910    5805 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-062000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-062000 -n newest-cni-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-062000 -n newest-cni-062000: exit status 7 (67.203375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
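Every qemu2 restart in this run fails the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet (the SocketVMnetPath in the profile config above), so QEMU itself is never started. A minimal reachability probe for that socket, independent of minikube; the path is taken from the config dump and the 2-second timeout is arbitrary:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SocketVMnetPath from the profile config; adjust if socket_vmnet lives elsewhere
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the driver failure in this run
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, the socket_vmnet daemon on the host, not minikube, is the component to restart.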

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-062000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-062000 "sudo crictl images -o json": exit status 89 (43.351625ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-062000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-062000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-062000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-062000 -n newest-cni-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-062000 -n newest-cni-062000: exit status 7 (28.732291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
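The decode error above is downstream of the stopped host: `minikube ssh` prints its advice text on stdout, and the test feeds that straight into a JSON decoder, which trips on the leading '*'. A minimal sketch of the same decode step; the `images`/`repoTags` field names follow crictl's JSON output shape and are assumptions here, not something this report shows:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList models just the fields needed from `crictl images -o json`.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			// the failure mode above: advice text rather than JSON on stdout
			fmt.Println("decode failed:", err)
			return
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags)
		}
	}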

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-062000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-062000 --alsologtostderr -v=1: exit status 89 (40.775916ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-062000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 17:41:31.041534    5822 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:41:31.041705    5822 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:31.041708    5822 out.go:309] Setting ErrFile to fd 2...
	I1003 17:41:31.041711    5822 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:41:31.041849    5822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:41:31.042072    5822 out.go:303] Setting JSON to false
	I1003 17:41:31.042080    5822 mustload.go:65] Loading cluster: newest-cni-062000
	I1003 17:41:31.042281    5822 config.go:182] Loaded profile config "newest-cni-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:41:31.046706    5822 out.go:177] * The control plane node must be running for this command
	I1003 17:41:31.050762    5822 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-062000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-062000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-062000 -n newest-cni-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-062000 -n newest-cni-062000: exit status 7 (29.039209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-062000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-062000 -n newest-cni-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-062000 -n newest-cni-062000: exit status 7 (28.491416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (144/256)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.2/json-events 6.54
11 TestDownloadOnly/v1.28.2/preload-exists 0
14 TestDownloadOnly/v1.28.2/kubectl 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.25
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.24
19 TestBinaryMirror 0.35
22 TestAddons/Setup 405.44
27 TestAddons/parallel/MetricsServer 5.26
30 TestAddons/parallel/CSI 39.98
31 TestAddons/parallel/Headlamp 12.38
32 TestAddons/parallel/CloudSpanner 5.19
33 TestAddons/parallel/LocalPath 52.08
36 TestAddons/serial/GCPAuth/Namespaces 0.07
37 TestAddons/StoppedEnableDisable 12.27
45 TestHyperKitDriverInstallOrUpdate 8.94
48 TestErrorSpam/setup 30.33
49 TestErrorSpam/start 0.37
50 TestErrorSpam/status 0.25
51 TestErrorSpam/pause 0.67
52 TestErrorSpam/unpause 0.62
53 TestErrorSpam/stop 3.23
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 43.49
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 35.34
60 TestFunctional/serial/KubeContext 0.03
61 TestFunctional/serial/KubectlGetPods 0.04
64 TestFunctional/serial/CacheCmd/cache/add_remote 3.89
65 TestFunctional/serial/CacheCmd/cache/add_local 1.99
66 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
67 TestFunctional/serial/CacheCmd/cache/list 0.03
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
69 TestFunctional/serial/CacheCmd/cache/cache_reload 0.94
70 TestFunctional/serial/CacheCmd/cache/delete 0.07
71 TestFunctional/serial/MinikubeKubectlCmd 0.45
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.54
73 TestFunctional/serial/ExtraConfig 36.63
74 TestFunctional/serial/ComponentHealth 0.04
75 TestFunctional/serial/LogsCmd 0.67
76 TestFunctional/serial/LogsFileCmd 0.64
77 TestFunctional/serial/InvalidService 4.39
79 TestFunctional/parallel/ConfigCmd 0.2
80 TestFunctional/parallel/DashboardCmd 6.95
81 TestFunctional/parallel/DryRun 0.21
82 TestFunctional/parallel/InternationalLanguage 0.11
83 TestFunctional/parallel/StatusCmd 0.26
88 TestFunctional/parallel/AddonsCmd 0.12
89 TestFunctional/parallel/PersistentVolumeClaim 24.96
91 TestFunctional/parallel/SSHCmd 0.14
92 TestFunctional/parallel/CpCmd 0.28
94 TestFunctional/parallel/FileSync 0.07
95 TestFunctional/parallel/CertSync 0.54
99 TestFunctional/parallel/NodeLabels 0.04
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
103 TestFunctional/parallel/License 0.19
106 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.11
109 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
110 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
111 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
112 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
113 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
114 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
115 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
116 TestFunctional/parallel/ServiceCmd/List 0.28
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
119 TestFunctional/parallel/ServiceCmd/Format 0.1
120 TestFunctional/parallel/ServiceCmd/URL 0.11
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
122 TestFunctional/parallel/ProfileCmd/profile_list 0.15
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
124 TestFunctional/parallel/MountCmd/any-port 4.18
125 TestFunctional/parallel/MountCmd/specific-port 0.99
127 TestFunctional/parallel/Version/short 0.04
128 TestFunctional/parallel/Version/components 0.19
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.09
133 TestFunctional/parallel/ImageCommands/ImageBuild 1.58
134 TestFunctional/parallel/ImageCommands/Setup 1.76
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.31
136 TestFunctional/parallel/DockerEnv/bash 0.38
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.53
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.57
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
146 TestFunctional/delete_addon-resizer_images 0.12
147 TestFunctional/delete_my-image_image 0.04
148 TestFunctional/delete_minikube_cached_images 0.04
152 TestImageBuild/serial/Setup 29.46
153 TestImageBuild/serial/NormalBuild 1.03
155 TestImageBuild/serial/BuildWithDockerIgnore 0.12
156 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.11
159 TestIngressAddonLegacy/StartLegacyK8sCluster 71.48
161 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.35
162 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.26
166 TestJSONOutput/start/Command 47.47
167 TestJSONOutput/start/Audit 0
169 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Command 0.28
173 TestJSONOutput/pause/Audit 0
175 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Command 0.23
179 TestJSONOutput/unpause/Audit 0
181 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/stop/Command 12.07
185 TestJSONOutput/stop/Audit 0
187 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
189 TestErrorJSONOutput 0.34
194 TestMainNoArgs 0.03
195 TestMinikubeProfile 60.33
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
255 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
256 TestNoKubernetes/serial/ProfileList 0.14
257 TestNoKubernetes/serial/Stop 0.06
259 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
277 TestStartStop/group/old-k8s-version/serial/Stop 0.06
278 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
288 TestStartStop/group/no-preload/serial/Stop 0.06
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
299 TestStartStop/group/embed-certs/serial/Stop 0.06
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 0.06
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-278000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-278000: exit status 85 (91.927334ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT |          |
	|         | -p download-only-278000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/03 17:03:20
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:03:20.858615    1449 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:03:20.858796    1449 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:03:20.858799    1449 out.go:309] Setting ErrFile to fd 2...
	I1003 17:03:20.858801    1449 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:03:20.858958    1449 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	W1003 17:03:20.859020    1449 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17345-986/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17345-986/.minikube/config/config.json: no such file or directory
	I1003 17:03:20.860141    1449 out.go:303] Setting JSON to true
	I1003 17:03:20.877628    1449 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":174,"bootTime":1696377626,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:03:20.877705    1449 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:03:20.885159    1449 out.go:97] [download-only-278000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:03:20.889091    1449 out.go:169] MINIKUBE_LOCATION=17345
	I1003 17:03:20.885275    1449 notify.go:220] Checking for updates...
	W1003 17:03:20.885303    1449 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball: no such file or directory
	I1003 17:03:20.900972    1449 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:03:20.905061    1449 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:03:20.908145    1449 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:03:20.911073    1449 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	W1003 17:03:20.917063    1449 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 17:03:20.917261    1449 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:03:20.923036    1449 out.go:97] Using the qemu2 driver based on user configuration
	I1003 17:03:20.923042    1449 start.go:298] selected driver: qemu2
	I1003 17:03:20.923056    1449 start.go:902] validating driver "qemu2" against <nil>
	I1003 17:03:20.923123    1449 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1003 17:03:20.927054    1449 out.go:169] Automatically selected the socket_vmnet network
	I1003 17:03:20.934034    1449 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1003 17:03:20.934137    1449 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 17:03:20.934198    1449 cni.go:84] Creating CNI manager for ""
	I1003 17:03:20.934215    1449 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 17:03:20.934219    1449 start_flags.go:321] config:
	{Name:download-only-278000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-278000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:03:20.940506    1449 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:03:20.945030    1449 out.go:97] Downloading VM boot image ...
	I1003 17:03:20.945060    1449 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/iso/arm64/minikube-v1.31.0-1695060926-17240-arm64.iso
	I1003 17:03:25.224536    1449 out.go:97] Starting control plane node download-only-278000 in cluster download-only-278000
	I1003 17:03:25.224554    1449 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 17:03:25.280363    1449 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1003 17:03:25.280384    1449 cache.go:57] Caching tarball of preloaded images
	I1003 17:03:25.280517    1449 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 17:03:25.284733    1449 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1003 17:03:25.284739    1449 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1003 17:03:25.358847    1449 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1003 17:03:30.398297    1449 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1003 17:03:30.398467    1449 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1003 17:03:31.040587    1449 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1003 17:03:31.040782    1449 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/download-only-278000/config.json ...
	I1003 17:03:31.040798    1449 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/download-only-278000/config.json: {Name:mk5649223888d7fca3bc6155a452f90fb2c86f5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:03:31.041029    1449 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1003 17:03:31.041176    1449 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I1003 17:03:31.274286    1449 out.go:169] 
	W1003 17:03:31.278536    1449 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17345-986/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103c65880 0x103c65880 0x103c65880 0x103c65880 0x103c65880 0x103c65880 0x103c65880] Decompressors:map[bz2:0x14000677640 gz:0x14000677648 tar:0x14000677560 tar.bz2:0x140006775a0 tar.gz:0x140006775b0 tar.xz:0x140006775f0 tar.zst:0x14000677630 tbz2:0x140006775a0 tgz:0x140006775b0 txz:0x140006775f0 tzst:0x14000677630 xz:0x14000677650 zip:0x14000677670 zst:0x14000677658] Getters:map[file:0x140006f8900 http:0x1400017e7d0 https:0x1400017e870] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1003 17:03:31.278565    1449 out_reason.go:110] 
	W1003 17:03:31.285624    1449 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 17:03:31.289414    1449 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-278000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
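The underlying failure recorded in this (passing) logs check is a 404 on the kubectl checksum: there are no darwin/arm64 kubectl builds for v1.16.0, so the download-only run could not cache kubectl. A minimal probe of the URL from the log, confirming the missing artifact:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// checksum URL from the download error above; expect "404 Not Found"
		url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status)
	}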

                                                
                                    
TestDownloadOnly/v1.28.2/json-events (6.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-278000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-278000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=qemu2 : (6.538834084s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (6.54s)
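This test drives `minikube start -o=json` and asserts over the emitted event stream. A minimal reader for such a stream, fed via stdin; the field names (`type`, `data.currentstep`, `data.name`) are assumptions about minikube's CloudEvents-style JSON lines, not something this report shows:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models only the fields this sketch prints.
	type event struct {
		Type string `json:"type"`
		Data struct {
			CurrentStep string `json:"currentstep"`
			Name        string `json:"name"`
		} `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube start -o=json ... | thisreader
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON lines
			}
			fmt.Println(ev.Type, ev.Data.CurrentStep, ev.Data.Name)
		}
	}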

                                                
                                    
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/kubectl
--- PASS: TestDownloadOnly/v1.28.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-278000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-278000: exit status 85 (75.171958ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT |          |
	|         | -p download-only-278000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-278000 | jenkins | v1.31.2 | 03 Oct 23 17:03 PDT |          |
	|         | -p download-only-278000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/03 17:03:31
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:03:31.476985    1466 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:03:31.477137    1466 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:03:31.477141    1466 out.go:309] Setting ErrFile to fd 2...
	I1003 17:03:31.477143    1466 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:03:31.477269    1466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	W1003 17:03:31.477335    1466 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17345-986/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17345-986/.minikube/config/config.json: no such file or directory
	I1003 17:03:31.478236    1466 out.go:303] Setting JSON to true
	I1003 17:03:31.494183    1466 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":185,"bootTime":1696377626,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:03:31.494275    1466 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:03:31.498141    1466 out.go:97] [download-only-278000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:03:31.502081    1466 out.go:169] MINIKUBE_LOCATION=17345
	I1003 17:03:31.498224    1466 notify.go:220] Checking for updates...
	I1003 17:03:31.509126    1466 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:03:31.512142    1466 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:03:31.515055    1466 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:03:31.518097    1466 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	W1003 17:03:31.524023    1466 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 17:03:31.524328    1466 config.go:182] Loaded profile config "download-only-278000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1003 17:03:31.524357    1466 start.go:810] api.Load failed for download-only-278000: filestore "download-only-278000": Docker machine "download-only-278000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1003 17:03:31.524425    1466 driver.go:373] Setting default libvirt URI to qemu:///system
	W1003 17:03:31.524441    1466 start.go:810] api.Load failed for download-only-278000: filestore "download-only-278000": Docker machine "download-only-278000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1003 17:03:31.528033    1466 out.go:97] Using the qemu2 driver based on existing profile
	I1003 17:03:31.528041    1466 start.go:298] selected driver: qemu2
	I1003 17:03:31.528045    1466 start.go:902] validating driver "qemu2" against &{Name:download-only-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-278000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:03:31.530411    1466 cni.go:84] Creating CNI manager for ""
	I1003 17:03:31.530422    1466 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 17:03:31.530429    1466 start_flags.go:321] config:
	{Name:download-only-278000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-278000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:03:31.534712    1466 iso.go:125] acquiring lock: {Name:mkcc00c41dbf3c669d3c57dcea55708ed569b7af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:03:31.538116    1466 out.go:97] Starting control plane node download-only-278000 in cluster download-only-278000
	I1003 17:03:31.538124    1466 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:03:31.591102    1466 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:03:31.591114    1466 cache.go:57] Caching tarball of preloaded images
	I1003 17:03:31.591282    1466 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1003 17:03:31.596132    1466 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1003 17:03:31.596139    1466 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	I1003 17:03:31.674025    1466 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4?checksum=md5:48f32a2a1ca4194a6d2a21c3ded2b2db -> /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I1003 17:03:36.135258    1466 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	I1003 17:03:36.135389    1466 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17345-986/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-278000"
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.25s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-278000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)

TestBinaryMirror (0.35s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-585000 --alsologtostderr --binary-mirror http://127.0.0.1:49316 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-585000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-585000
--- PASS: TestBinaryMirror (0.35s)

TestAddons/Setup (405.44s)

=== RUN   TestAddons/Setup
addons_test.go:89: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-585000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:89: (dbg) Done: out/minikube-darwin-arm64 start -p addons-585000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=qemu2  --addons=ingress --addons=ingress-dns: (6m45.441315042s)
--- PASS: TestAddons/Setup (405.44s)
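
Note: the run above enables every addon at start time via repeated --addons flags. Addons can also be toggled on the existing profile after start; a minimal sketch (profile name reused from the run above):
	out/minikube-darwin-arm64 addons list -p addons-585000
	out/minikube-darwin-arm64 -p addons-585000 addons enable registry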

TestAddons/parallel/MetricsServer (5.26s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: metrics-server stabilized in 2.234542ms
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-bvd9d" [e70964ce-39c1-444c-bfcc-e672f6027857] Running
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007638708s
addons_test.go:393: (dbg) Run:  kubectl --context addons-585000 top pods -n kube-system
addons_test.go:410: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)
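
Note: the pass condition above can be checked by hand with the same two calls the test makes, plus a pod listing; a minimal sketch against the addons-585000 context:
	kubectl --context addons-585000 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-585000 top pods -n kube-system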

TestAddons/parallel/CSI (39.98s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 2.502333ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:552: (dbg) Run:  kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f9cd41c7-beac-436e-984c-e44cc3f6e97c] Pending
helpers_test.go:344: "task-pv-pod" [f9cd41c7-beac-436e-984c-e44cc3f6e97c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f9cd41c7-beac-436e-984c-e44cc3f6e97c] Running
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.00894825s
addons_test.go:562: (dbg) Run:  kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-585000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-585000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-585000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-585000 delete pod task-pv-pod
addons_test.go:578: (dbg) Run:  kubectl --context addons-585000 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d41f95b7-534f-4c3e-a67e-edbb7ed0337e] Pending
helpers_test.go:344: "task-pv-pod-restore" [d41f95b7-534f-4c3e-a67e-edbb7ed0337e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d41f95b7-534f-4c3e-a67e-edbb7ed0337e] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.008733875s
addons_test.go:604: (dbg) Run:  kubectl --context addons-585000 delete pod task-pv-pod-restore
addons_test.go:608: (dbg) Run:  kubectl --context addons-585000 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-585000 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-darwin-arm64 -p addons-585000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.084792084s)
addons_test.go:620: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.98s)
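
Note: condensed, the CSI pass above is a provision/snapshot/restore round trip. The kubectl sequence, as the test issues it (the manifests are testdata files in the minikube repo):
	kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/pvc.yaml            # claim "hpvc"
	kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod writing to the claim
	kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot "new-snapshot-demo"
	kubectl --context addons-585000 delete pod task-pv-pod
	kubectl --context addons-585000 delete pvc hpvc
	kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # new claim from the snapshot
	kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml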

TestAddons/parallel/Headlamp (12.38s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:802: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-585000 --alsologtostderr -v=1
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-pkdpk" [8c3b41e0-afab-4003-b872-cd92998ea28a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-pkdpk" [8c3b41e0-afab-4003-b872-cd92998ea28a] Running
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.007099333s
--- PASS: TestAddons/parallel/Headlamp (12.38s)

TestAddons/parallel/CloudSpanner (5.19s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-dm5mt" [ed8f78fa-23ac-4057-8e73-ef46ddd8e988] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006669584s
addons_test.go:838: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-585000
--- PASS: TestAddons/parallel/CloudSpanner (5.19s)

TestAddons/parallel/LocalPath (52.08s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:851: (dbg) Run:  kubectl --context addons-585000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:857: (dbg) Run:  kubectl --context addons-585000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:861: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [19d9c25d-96d9-48ea-99c3-bd1b7de859a5] Pending
helpers_test.go:344: "test-local-path" [19d9c25d-96d9-48ea-99c3-bd1b7de859a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [19d9c25d-96d9-48ea-99c3-bd1b7de859a5] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [19d9c25d-96d9-48ea-99c3-bd1b7de859a5] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.008301s
addons_test.go:869: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o=json
addons_test.go:878: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 ssh "cat /opt/local-path-provisioner/pvc-320167fa-02d3-46e8-a116-8a91ec031e73_default_test-pvc/file1"
addons_test.go:890: (dbg) Run:  kubectl --context addons-585000 delete pod test-local-path
addons_test.go:894: (dbg) Run:  kubectl --context addons-585000 delete pvc test-pvc
addons_test.go:898: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:898: (dbg) Done: out/minikube-darwin-arm64 -p addons-585000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.312275125s)
--- PASS: TestAddons/parallel/LocalPath (52.08s)
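
Note: the local-path pass follows the same shape: a PVC bound by the Rancher local-path provisioner, a pod that writes file1, then a read-back over minikube ssh. Condensed from the run above (the pvc-… directory name is generated per run; the one shown is from this run):
	kubectl --context addons-585000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-585000 apply -f testdata/storage-provisioner-rancher/pod.yaml
	out/minikube-darwin-arm64 -p addons-585000 ssh "cat /opt/local-path-provisioner/pvc-320167fa-02d3-46e8-a116-8a91ec031e73_default_test-pvc/file1"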

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:628: (dbg) Run:  kubectl --context addons-585000 create ns new-namespace
addons_test.go:642: (dbg) Run:  kubectl --context addons-585000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:150: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-585000
addons_test.go:150: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-585000: (12.080261s)
addons_test.go:154: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-585000
addons_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-585000
addons_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-585000
--- PASS: TestAddons/StoppedEnableDisable (12.27s)

TestHyperKitDriverInstallOrUpdate (8.94s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.94s)

TestErrorSpam/setup (30.33s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-899000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-899000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 --driver=qemu2 : (30.328988417s)
--- PASS: TestErrorSpam/setup (30.33s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 pause
--- PASS: TestErrorSpam/pause (0.67s)

TestErrorSpam/unpause (0.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 unpause
--- PASS: TestErrorSpam/unpause (0.62s)

TestErrorSpam/stop (3.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 stop: (3.071072083s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-899000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-899000 stop
--- PASS: TestErrorSpam/stop (3.23s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17345-986/.minikube/files/etc/test/nested/copy/1447/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (43.49s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-488000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-488000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (43.494218709s)
--- PASS: TestFunctional/serial/StartWithProxy (43.49s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.34s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-488000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-488000 --alsologtostderr -v=8: (35.338933625s)
functional_test.go:659: soft start took 35.339365583s for "functional-488000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.34s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-488000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-488000 cache add registry.k8s.io/pause:3.1: (1.501111625s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-488000 cache add registry.k8s.io/pause:3.3: (1.271177375s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-488000 cache add registry.k8s.io/pause:latest: (1.1157455s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.89s)

TestFunctional/serial/CacheCmd/cache/add_local (1.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-488000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1383661691/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 cache add minikube-local-cache-test:functional-488000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-arm64 -p functional-488000 cache add minikube-local-cache-test:functional-488000: (1.497761917s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 cache delete minikube-local-cache-test:functional-488000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-488000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.99s)
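
Note: the add_local case round-trips a locally built image through minikube's image cache. A minimal sketch of the same flow with a hypothetical tag (the test uses a generated minikube-local-cache-test tag instead):
	docker build -t local-cache-demo:latest .
	out/minikube-darwin-arm64 -p functional-488000 cache add local-cache-demo:latest
	out/minikube-darwin-arm64 -p functional-488000 cache delete local-cache-demo:latest
	docker rmi local-cache-demo:latest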

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (76.639833ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.94s)
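
Note: the cache_reload sequence can be reproduced by hand: remove the image inside the node, confirm it is gone, then have minikube re-push everything in its cache:
	out/minikube-darwin-arm64 -p functional-488000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-488000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image no longer present
	out/minikube-darwin-arm64 -p functional-488000 cache reload
	out/minikube-darwin-arm64 -p functional-488000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again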

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.45s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 kubectl -- --context functional-488000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.45s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-488000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

TestFunctional/serial/ExtraConfig (36.63s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-488000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1003 17:25:24.585613    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
E1003 17:25:24.592384    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
E1003 17:25:24.604453    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
E1003 17:25:24.624532    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
E1003 17:25:24.666619    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
E1003 17:25:24.748689    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
E1003 17:25:24.910760    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
E1003 17:25:25.232888    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
E1003 17:25:25.875036    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
E1003 17:25:27.157209    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
E1003 17:25:29.719386    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-488000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.634344583s)
functional_test.go:757: restart took 36.63442375s for "functional-488000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.63s)
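
Note: --extra-config takes component.key=value pairs, which is how the run above injects an admission plugin into the apiserver. Other components accept the same syntax; a sketch with an illustrative kubelet flag (value hypothetical):
	out/minikube-darwin-arm64 start -p functional-488000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	out/minikube-darwin-arm64 start -p functional-488000 --extra-config=kubelet.max-pods=150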

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-488000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
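
Note: the health check parses the control-plane pods' phase and Ready condition from the JSON above; an equivalent spot check by hand:
	kubectl --context functional-488000 -n kube-system get po -l tier=control-plane -o wide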

TestFunctional/serial/LogsCmd (0.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.67s)

TestFunctional/serial/LogsFileCmd (0.64s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3969196917/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.64s)

TestFunctional/serial/InvalidService (4.39s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-488000 apply -f testdata/invalidsvc.yaml
E1003 17:25:34.841494    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-488000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-488000: exit status 115 (112.329042ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31600 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-488000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-488000 delete -f testdata/invalidsvc.yaml: (1.162872667s)
--- PASS: TestFunctional/serial/InvalidService (4.39s)
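
Note: SVC_UNREACHABLE (exit status 115) fires because the service's selector matches no running pod, even though the NodePort URL itself is printed. The check in isolation, using the same testdata manifest:
	kubectl --context functional-488000 apply -f testdata/invalidsvc.yaml
	out/minikube-darwin-arm64 service invalid-svc -p functional-488000    # exit 115: no running pod backs the service
	kubectl --context functional-488000 delete -f testdata/invalidsvc.yaml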

TestFunctional/parallel/ConfigCmd (0.2s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 config get cpus: exit status 14 (27.814708ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 config get cpus: exit status 14 (27.776958ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.20s)
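
Note: both non-zero exits above are the same case: config get on a key that was never set (or was just unset) fails with exit status 14 and "specified key could not be found in config". The set/get/unset cycle in isolation:
	out/minikube-darwin-arm64 -p functional-488000 config set cpus 2
	out/minikube-darwin-arm64 -p functional-488000 config get cpus      # prints 2
	out/minikube-darwin-arm64 -p functional-488000 config unset cpus
	out/minikube-darwin-arm64 -p functional-488000 config get cpus      # exit 14: key not found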

TestFunctional/parallel/DashboardCmd (6.95s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-488000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-488000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2636: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.95s)

TestFunctional/parallel/DryRun (0.21s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-488000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-488000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.744542ms)
-- stdout --
	* [functional-488000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
                                                
-- /stdout --
** stderr ** 
	I1003 17:26:25.505317    2623 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:26:25.505465    2623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:26:25.505469    2623 out.go:309] Setting ErrFile to fd 2...
	I1003 17:26:25.505471    2623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:26:25.505592    2623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:26:25.506576    2623 out.go:303] Setting JSON to false
	I1003 17:26:25.522880    2623 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1559,"bootTime":1696377626,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:26:25.522952    2623 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:26:25.527957    2623 out.go:177] * [functional-488000] minikube v1.31.2 on Darwin 14.0 (arm64)
	I1003 17:26:25.535074    2623 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:26:25.539017    2623 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:26:25.535150    2623 notify.go:220] Checking for updates...
	I1003 17:26:25.544854    2623 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:26:25.548011    2623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:26:25.550986    2623 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:26:25.553966    2623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:26:25.557338    2623 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:26:25.557570    2623 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:26:25.562028    2623 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 17:26:25.568968    2623 start.go:298] selected driver: qemu2
	I1003 17:26:25.568975    2623 start.go:902] validating driver "qemu2" against &{Name:functional-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-488000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:26:25.569021    2623 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:26:25.574990    2623 out.go:177] 
	W1003 17:26:25.578953    2623 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1003 17:26:25.581992    2623 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-488000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.21s)
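
Note: --dry-run exercises minikube's validation without creating anything, so the 250MB request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) while the second, memory-flag-free dry run validates cleanly against the existing profile:
	out/minikube-darwin-arm64 start -p functional-488000 --dry-run --memory 250MB --driver=qemu2    # exit 23: below the 1800MB usable minimum
	out/minikube-darwin-arm64 start -p functional-488000 --dry-run --driver=qemu2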

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-488000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-488000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.600375ms)
-- stdout --
	* [functional-488000] minikube v1.31.2 sur Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1003 17:26:25.389234    2619 out.go:296] Setting OutFile to fd 1 ...
	I1003 17:26:25.389375    2619 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:26:25.389378    2619 out.go:309] Setting ErrFile to fd 2...
	I1003 17:26:25.389380    2619 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1003 17:26:25.389520    2619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
	I1003 17:26:25.391015    2619 out.go:303] Setting JSON to false
	I1003 17:26:25.409544    2619 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1559,"bootTime":1696377626,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1003 17:26:25.409631    2619 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1003 17:26:25.413231    2619 out.go:177] * [functional-488000] minikube v1.31.2 sur Darwin 14.0 (arm64)
	I1003 17:26:25.421996    2619 out.go:177]   - MINIKUBE_LOCATION=17345
	I1003 17:26:25.426002    2619 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	I1003 17:26:25.422100    2619 notify.go:220] Checking for updates...
	I1003 17:26:25.431991    2619 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 17:26:25.435151    2619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:26:25.437882    2619 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	I1003 17:26:25.440969    2619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:26:25.444285    2619 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1003 17:26:25.444518    2619 driver.go:373] Setting default libvirt URI to qemu:///system
	I1003 17:26:25.448967    2619 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1003 17:26:25.455997    2619 start.go:298] selected driver: qemu2
	I1003 17:26:25.456004    2619 start.go:902] validating driver "qemu2" against &{Name:functional-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-488000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1003 17:26:25.456046    2619 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:26:25.462937    2619 out.go:177] 
	W1003 17:26:25.467004    2619 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1003 17:26:25.469834    2619 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)
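Note on the format string logged above: "status -f" takes a Go text/template rendered against minikube's status struct, which is why {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} appear. A minimal sketch of the same mechanism, using a stand-in Status type rather than minikube's real one:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in with the fields the template above references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Same shape as the format string passed to "status -f" above.
	const format = "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	// Stand-in values; the real ones come from probing the VM and apiserver.
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}

The stray "kublet" label in the logged command is harmless: text outside the {{...}} actions is printed verbatim, and only the {{.Field}} references must resolve against the struct.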

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (24.96s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [09c68c2f-143a-4e5c-b525-583749597686] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012604083s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-488000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-488000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-488000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-488000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [443971cc-17cf-439e-b129-dc32b28d967c] Pending
helpers_test.go:344: "sp-pod" [443971cc-17cf-439e-b129-dc32b28d967c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1003 17:25:45.081545    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [443971cc-17cf-439e-b129-dc32b28d967c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.009791333s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-488000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-488000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-488000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f27f5d28-01f4-4ccd-9cc2-211b51d5b6fb] Pending
helpers_test.go:344: "sp-pod" [f27f5d28-01f4-4ccd-9cc2-211b51d5b6fb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f27f5d28-01f4-4ccd-9cc2-211b51d5b6fb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.006467875s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-488000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.96s)
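The sequence above is a persistence check: a file written through the first sp-pod must still be visible after that pod is deleted and a new one is created against the same claim. A minimal sketch of that flow, assuming kubectl on PATH and the same context and manifests shown in the log (the readiness waits the test performs between steps are omitted here):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the test cluster's context.
func run(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-488000"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("kubectl %v:\n%s", args, out)
	return err
}

func main() {
	steps := [][]string{
		{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		// The real test waits up to 3m here until sp-pod is Running.
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		// If the volume really persisted, foo is still listed here.
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, step := range steps {
		if err := run(step...); err != nil {
			panic(err)
		}
	}
}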

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh -n functional-488000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 cp functional-488000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3201950889/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh -n functional-488000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.28s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1447/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "sudo cat /etc/test/nested/copy/1447/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.54s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1447.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "sudo cat /etc/ssl/certs/1447.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1447.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "sudo cat /usr/share/ca-certificates/1447.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14472.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "sudo cat /etc/ssl/certs/14472.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14472.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "sudo cat /usr/share/ca-certificates/14472.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.54s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-488000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
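The --template argument above ranges over the labels map of the first node in the list and prints each key. kubectl evaluates it with Go's text/template, so the identical template runs in plain Go; a sketch with stand-in label data:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for the node list kubectl returns; only the shape matters.
	data := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{
				"metadata": map[string]interface{}{
					"labels": map[string]string{
						"kubernetes.io/arch": "arm64",
						"kubernetes.io/os":   "linux",
					},
				},
			},
		},
	}
	// Exactly the template from the command above: print each label key.
	const tpl = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	t := template.Must(template.New("labels").Parse(tpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}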

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 ssh "sudo systemctl is-active crio": exit status 1 (67.044625ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-488000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-488000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ba3d4d03-cc14-4a75-a220-462fe3645ea7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ba3d4d03-cc14-4a75-a220-462fe3645ea7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004793583s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-488000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.176.75 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
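The dig invocation asks the cluster DNS server directly for the service's A record; 10.96.0.10 is only reachable from the host while the tunnel is up. A rough Go equivalent using a net.Resolver pinned to that server (service name and DNS IP taken from the log):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Bypass the system resolver and talk to the cluster's DNS server.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		panic(err) // fails unless the tunnel is routing the service CIDR
	}
	fmt.Println(addrs)
}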

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-488000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-488000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-488000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-w8tqc" [8d75cba7-a791-4207-8484-c811058cc91b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-w8tqc" [8d75cba7-a791-4207-8484-c811058cc91b] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.007614708s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 service list -o json
functional_test.go:1493: Took "280.332708ms" to run "out/minikube-darwin-arm64 -p functional-488000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:31329
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.10s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:31329
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "113.208709ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "33.675709ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "115.521458ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "32.686541ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

TestFunctional/parallel/MountCmd/any-port (4.18s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-488000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2195568235/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696379169954988000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2195568235/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696379169954988000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2195568235/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696379169954988000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2195568235/001/test-1696379169954988000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (58.963583ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17345-986/.minikube/machines/functional-488000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_mount_796ee60b681747e4390cd7790cd689f5e0efb632_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  4 00:26 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  4 00:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  4 00:26 test-1696379169954988000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh cat /mount-9p/test-1696379169954988000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-488000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ef8e1b32-a71d-465d-98ee-46e71af8a8a0] Pending
helpers_test.go:344: "busybox-mount" [ef8e1b32-a71d-465d-98ee-46e71af8a8a0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ef8e1b32-a71d-465d-98ee-46e71af8a8a0] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ef8e1b32-a71d-465d-98ee-46e71af8a8a0] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.008892292s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-488000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-488000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2195568235/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.18s)
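Worth noting in the run above: the first findmnt probe failed with a transient GUEST_STATUS error while the mount was still coming up, and the harness simply re-ran it and passed. A minimal sketch of a retry loop of that shape (the attempt count and sleep are illustrative, not the harness's actual values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	var lastErr error
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-488000",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted after %d attempt(s):\n%s", attempt, out)
			return
		}
		lastErr = err
		time.Sleep(time.Second) // give the guest a moment before probing again
	}
	panic(lastErr)
}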

TestFunctional/parallel/MountCmd/specific-port (0.99s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-488000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2794950848/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.534333ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-488000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2794950848/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 ssh "sudo umount -f /mount-9p": exit status 1 (63.971125ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-488000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-488000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2794950848/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.99s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-488000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-488000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-488000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-488000 image ls --format short --alsologtostderr:
I1003 17:26:40.763136    2825 out.go:296] Setting OutFile to fd 1 ...
I1003 17:26:40.763285    2825 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:26:40.763288    2825 out.go:309] Setting ErrFile to fd 2...
I1003 17:26:40.763290    2825 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:26:40.763439    2825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
I1003 17:26:40.763845    2825 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:26:40.763922    2825 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:26:40.764725    2825 ssh_runner.go:195] Run: systemctl --version
I1003 17:26:40.764737    2825 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/functional-488000/id_rsa Username:docker}
I1003 17:26:40.794737    2825 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-488000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.2           | 7da62c127fc0f | 68.3MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | alpine            | df8fd1ca35d66 | 43.5MB |
| docker.io/library/nginx                     | latest            | 2a4fbb36e9660 | 192MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 89d57b83c1786 | 116MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 64fc40cee3716 | 57.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/google-containers/addon-resizer      | functional-488000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-488000 | b7914a3bfbdc5 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.2           | 30bb499447fe1 | 120MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-488000 image ls --format table --alsologtostderr:
I1003 17:26:40.845779    2829 out.go:296] Setting OutFile to fd 1 ...
I1003 17:26:40.845937    2829 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:26:40.845940    2829 out.go:309] Setting ErrFile to fd 2...
I1003 17:26:40.845943    2829 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:26:40.846086    2829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
I1003 17:26:40.846501    2829 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:26:40.846563    2829 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:26:40.847321    2829 ssh_runner.go:195] Run: systemctl --version
I1003 17:26:40.847341    2829 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/functional-488000/id_rsa Username:docker}
I1003 17:26:40.877055    2829 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-488000 image ls --format json --alsologtostderr:
[{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43500000"},{"id":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"120000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"57800000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"68300000"},{"id":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"116000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-488000"],"size":"32900000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b7914a3bfbdc5c913f88c0a190d309131b65b5314afd669451c6beee49e86f74","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-488000"],"size":"30"},{"id":"2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-488000 image ls --format json --alsologtostderr:
I1003 17:26:40.842397    2828 out.go:296] Setting OutFile to fd 1 ...
I1003 17:26:40.842553    2828 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:26:40.842557    2828 out.go:309] Setting ErrFile to fd 2...
I1003 17:26:40.842559    2828 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:26:40.842703    2828 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
I1003 17:26:40.843126    2828 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:26:40.843182    2828 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:26:40.844211    2828 ssh_runner.go:195] Run: systemctl --version
I1003 17:26:40.844221    2828 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/functional-488000/id_rsa Username:docker}
I1003 17:26:40.877190    2828 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
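The JSON block above is an array of image records with id, repoDigests, repoTags, and size fields. A small sketch that decodes output of that shape (field names taken from the log; the program and file name are illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, printed as a decimal string
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%.12s  %v\n", img.ID, img.RepoTags)
	}
}

Used by piping the listing into it, e.g.: out/minikube-darwin-arm64 -p functional-488000 image ls --format json | go run imagels.go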

TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-488000 image ls --format yaml --alsologtostderr:
- id: df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43500000"
- id: 7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "68300000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "57800000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: b7914a3bfbdc5c913f88c0a190d309131b65b5314afd669451c6beee49e86f74
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-488000
size: "30"
- id: 89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "116000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-488000
size: "32900000"
- id: 30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "120000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-488000 image ls --format yaml --alsologtostderr:
I1003 17:26:40.763081    2824 out.go:296] Setting OutFile to fd 1 ...
I1003 17:26:40.763275    2824 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:26:40.763279    2824 out.go:309] Setting ErrFile to fd 2...
I1003 17:26:40.763282    2824 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:26:40.763439    2824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
I1003 17:26:40.763889    2824 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:26:40.763945    2824 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:26:40.765210    2824 ssh_runner.go:195] Run: systemctl --version
I1003 17:26:40.765219    2824 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/functional-488000/id_rsa Username:docker}
I1003 17:26:40.795899    2824 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
W1003 17:26:40.811918    2824 root.go:91] failed to log command end to audit: failed to find a log row with id equals to cb06fd0a-455f-43a2-ada9-29d14d29b992
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 ssh pgrep buildkitd: exit status 1 (62.198959ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image build -t localhost/my-image:functional-488000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-488000 image build -t localhost/my-image:functional-488000 testdata/build --alsologtostderr: (1.436464584s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-488000 image build -t localhost/my-image:functional-488000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 25c5f6f973eb
Removing intermediate container 25c5f6f973eb
---> 7b75bc84e012
Step 3/3 : ADD content.txt /
---> ff059ce68acb
Successfully built ff059ce68acb
Successfully tagged localhost/my-image:functional-488000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-488000 image build -t localhost/my-image:functional-488000 testdata/build --alsologtostderr:
I1003 17:26:40.983065    2834 out.go:296] Setting OutFile to fd 1 ...
I1003 17:26:40.983316    2834 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:26:40.983320    2834 out.go:309] Setting ErrFile to fd 2...
I1003 17:26:40.983322    2834 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 17:26:40.983453    2834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17345-986/.minikube/bin
I1003 17:26:40.983899    2834 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:26:40.984682    2834 config.go:182] Loaded profile config "functional-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1003 17:26:40.985596    2834 ssh_runner.go:195] Run: systemctl --version
I1003 17:26:40.985610    2834 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17345-986/.minikube/machines/functional-488000/id_rsa Username:docker}
I1003 17:26:41.017272    2834 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3009943951.tar
I1003 17:26:41.017334    2834 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1003 17:26:41.021008    2834 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3009943951.tar
I1003 17:26:41.022417    2834 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3009943951.tar: stat -c "%s %y" /var/lib/minikube/build/build.3009943951.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3009943951.tar': No such file or directory
I1003 17:26:41.022428    2834 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3009943951.tar --> /var/lib/minikube/build/build.3009943951.tar (3072 bytes)
I1003 17:26:41.029590    2834 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3009943951
I1003 17:26:41.032842    2834 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3009943951 -xf /var/lib/minikube/build/build.3009943951.tar
I1003 17:26:41.035648    2834 docker.go:340] Building image: /var/lib/minikube/build/build.3009943951
I1003 17:26:41.035685    2834 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-488000 /var/lib/minikube/build/build.3009943951
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1003 17:26:42.378891    2834 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-488000 /var/lib/minikube/build/build.3009943951: (1.343220125s)
I1003 17:26:42.378976    2834 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3009943951
I1003 17:26:42.382144    2834 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3009943951.tar
I1003 17:26:42.384932    2834 build_images.go:207] Built localhost/my-image:functional-488000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3009943951.tar
I1003 17:26:42.384946    2834 build_images.go:123] succeeded building to: functional-488000
I1003 17:26:42.384949    2834 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.58s)
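Note: the Step 1/3 through Step 3/3 lines in the stdout above fully determine the shape of the Dockerfile under testdata/build. A minimal sketch that reproduces the same three-layer build against this profile (the scratch directory and the contents of content.txt are illustrative assumptions, not the literal test fixture):

    # Recreate a build context matching the logged steps.
    mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
    echo "hello" > content.txt    # placeholder payload for the ADD step
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    # Build inside the cluster's Docker daemon, as the test does.
    out/minikube-darwin-arm64 -p functional-488000 image build -t localhost/my-image:functional-488000 .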

TestFunctional/parallel/ImageCommands/Setup (1.76s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.711659416s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-488000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image load --daemon gcr.io/google-containers/addon-resizer:functional-488000 --alsologtostderr
2023/10/03 17:26:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-488000 image load --daemon gcr.io/google-containers/addon-resizer:functional-488000 --alsologtostderr: (2.231004s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.31s)

TestFunctional/parallel/DockerEnv/bash (0.38s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-488000 docker-env) && out/minikube-darwin-arm64 status -p functional-488000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-488000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.38s)
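The two commands above are the standard docker-env pattern: minikube docker-env prints shell export statements (DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY, MINIKUBE_ACTIVE_DOCKERD), so eval-ing its output points the host's docker CLI at the daemon inside the minikube VM. A minimal usage sketch:

    # Route the local docker CLI to the VM's Docker daemon...
    eval $(out/minikube-darwin-arm64 -p functional-488000 docker-env)
    docker images    # now lists cluster-side images
    # ...and restore the previous environment when finished.
    eval $(out/minikube-darwin-arm64 -p functional-488000 docker-env --unset)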

TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image load --daemon gcr.io/google-containers/addon-resizer:functional-488000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-488000 image load --daemon gcr.io/google-containers/addon-resizer:functional-488000 --alsologtostderr: (1.453785167s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.53s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.593999834s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-488000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image load --daemon gcr.io/google-containers/addon-resizer:functional-488000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-488000 image load --daemon gcr.io/google-containers/addon-resizer:functional-488000 --alsologtostderr: (1.842057s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.57s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image save gcr.io/google-containers/addon-resizer:functional-488000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image rm gcr.io/google-containers/addon-resizer:functional-488000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-488000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 image save --daemon gcr.io/google-containers/addon-resizer:functional-488000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-488000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)
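Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a full export/import round trip for a cluster-side image. The same sequence condensed, with an illustrative local tarball path in place of the Jenkins workspace path used above:

    # Save the image to a host tarball, drop it from the cluster,
    # then restore it from the file and confirm it is back.
    out/minikube-darwin-arm64 -p functional-488000 image save gcr.io/google-containers/addon-resizer:functional-488000 /tmp/addon-resizer.tar
    out/minikube-darwin-arm64 -p functional-488000 image rm gcr.io/google-containers/addon-resizer:functional-488000
    out/minikube-darwin-arm64 -p functional-488000 image load /tmp/addon-resizer.tar
    out/minikube-darwin-arm64 -p functional-488000 image ls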

TestFunctional/delete_addon-resizer_images (0.12s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-488000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-488000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-488000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (29.46s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-329000 --driver=qemu2 
E1003 17:26:46.525201    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-329000 --driver=qemu2 : (29.4564195s)
--- PASS: TestImageBuild/serial/Setup (29.46s)

TestImageBuild/serial/NormalBuild (1.03s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-329000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-329000: (1.031469s)
--- PASS: TestImageBuild/serial/NormalBuild (1.03s)

TestImageBuild/serial/BuildWithDockerIgnore (0.12s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-329000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-329000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)

TestIngressAddonLegacy/StartLegacyK8sCluster (71.48s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-830000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
E1003 17:28:08.445948    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-830000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m11.47590675s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (71.48s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.35s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-830000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-830000 addons enable ingress --alsologtostderr -v=5: (17.352571625s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.35s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.26s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-830000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.26s)

TestJSONOutput/start/Command (47.47s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-246000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-246000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (47.473071s)
--- PASS: TestJSONOutput/start/Command (47.47s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.28s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-246000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.28s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.23s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-246000 --output=json --user=testUser
E1003 17:30:24.579745    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
--- PASS: TestJSONOutput/unpause/Command (0.23s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.07s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-246000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-246000 --output=json --user=testUser: (12.07344375s)
--- PASS: TestJSONOutput/stop/Command (12.07s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.34s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-766000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-766000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (98.92975ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3d5da8a4-19c1-4edf-bd01-37b4b9f1891f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-766000] minikube v1.31.2 on Darwin 14.0 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2ffadefd-4aa6-4546-a3b7-b120f163dcf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17345"}}
	{"specversion":"1.0","id":"f30b1ad1-3408-487e-bf47-c381a08f3733","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig"}}
	{"specversion":"1.0","id":"01859841-e77f-43da-9be5-5f06fe3c41ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ba172a16-6313-49af-8c9b-9a9873c46239","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3b17449b-f94f-4521-8e93-ca2699626aa7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube"}}
	{"specversion":"1.0","id":"95adbc9d-70d0-49ba-bc4d-53bd932276e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7c6f96ba-f102-411a-9ae6-0007d624bc1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-766000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-766000
--- PASS: TestErrorJSONOutput (0.34s)
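Each stdout line above is a CloudEvents envelope, so --output=json is line-delimited and machine-filterable. A sketch that pulls out only the error events (jq is assumed to be available; the field names come from the payload shown above):

    out/minikube-darwin-arm64 start -p json-output-error-766000 --output=json --driver=fail 2>/dev/null \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error")
               | .data.name + " (exit " + .data.exitcode + "): " + .data.message'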

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (60.33s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-716000 --driver=qemu2 
E1003 17:30:37.541588    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:30:37.547917    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:30:37.559973    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:30:37.582026    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:30:37.624069    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:30:37.706140    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:30:37.868220    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:30:38.190302    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:30:38.832663    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:30:40.114925    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:30:42.676207    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:30:47.798291    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
E1003 17:30:52.284971    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/addons-585000/client.crt: no such file or directory
E1003 17:30:58.040479    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-716000 --driver=qemu2 : (28.964221083s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-718000 --driver=qemu2 
E1003 17:31:18.522385    1447 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17345-986/.minikube/profiles/functional-488000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-718000 --driver=qemu2 : (30.621960834s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-716000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-718000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-718000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-718000
helpers_test.go:175: Cleaning up "first-716000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-716000
--- PASS: TestMinikubeProfile (60.33s)
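profile list -ojson returns the same data as the table view in machine-readable form. A sketch for extracting just the profile names; the .valid[].Name shape is an assumption about the schema, so verify it against your minikube version:

    out/minikube-darwin-arm64 profile list -ojson | jq -r '.valid[].Name'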

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-483000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-483000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.863875ms)

-- stdout --
	* [NoKubernetes-483000] minikube v1.31.2 on Darwin 14.0 (arm64)
	  - MINIKUBE_LOCATION=17345
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17345-986/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17345-986/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
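The MK_USAGE failure above is the expected guardrail: --no-kubernetes and --kubernetes-version are mutually exclusive. Per the error text, either of the following avoids exit status 14:

    # Drop the conflicting version flag entirely...
    out/minikube-darwin-arm64 start -p NoKubernetes-483000 --no-kubernetes --driver=qemu2
    # ...or clear a version pinned in the global config, as the message suggests.
    out/minikube-darwin-arm64 config unset kubernetes-version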

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-483000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-483000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (48.414667ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-483000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (0.14s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

TestNoKubernetes/serial/Stop (0.06s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-483000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-483000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-483000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (43.404792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-483000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
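As in the first VerifyK8sNotRunning check, exit status 89 here comes from minikube itself (control plane not running) rather than from systemctl. Against a running profile the same probe is a plain exit-code test, since is-active --quiet prints nothing and returns 0 only when the unit is active:

    out/minikube-darwin-arm64 ssh -p NoKubernetes-483000 "sudo systemctl is-active --quiet service kubelet" \
      && echo "kubelet active" || echo "kubelet not active"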

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-489000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000: exit status 7 (28.143042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-489000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
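The sequence above leans on two conventions: status --format renders a Go template over the status struct (here only the Host field), and the test explicitly tolerates exit status 7, which in this run accompanied the "Stopped" state. A sketch of the same probe with the exit code surfaced:

    out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-489000 -n old-k8s-version-489000
    echo "status exit code: $?"    # 7 accompanied "Stopped" above; the test treats it as may-be-ok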

TestStartStop/group/no-preload/serial/Stop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-387000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-387000 -n no-preload-387000: exit status 7 (28.829625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-387000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-391000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-391000 -n embed-certs-391000: exit status 7 (28.940667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-391000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-776000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-776000 -n default-k8s-diff-port-776000: exit status 7 (28.559375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-776000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-062000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-062000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-062000 -n newest-cni-062000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-062000 -n newest-cni-062000: exit status 7 (29.052292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-062000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/256)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:422: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (9.98s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-488000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2409114035/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-488000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2409114035/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-488000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2409114035/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount1: exit status 1 (72.296208ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount2: exit status 1 (60.136334ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount2: exit status 1 (60.450875ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount2: exit status 1 (60.071208ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount2: exit status 1 (61.397333ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount2: exit status 1 (59.849417ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-488000 ssh "findmnt -T" /mount2: exit status 1 (59.690417ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-488000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2409114035/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-488000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2409114035/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-488000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2409114035/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (9.98s)
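Note: the skip above is the test's verification loop giving up. It launches three mount daemons, then repeatedly checks `findmnt -T` through `minikube ssh` until the mounts appear, and skips when they never do, because macOS requires an interactive prompt before a non-code-signed binary may listen on a non-localhost port, which CI cannot grant. Below is a minimal Go sketch of that poll-and-retry pattern, assuming the binary and profile names taken from the log; the host path is hypothetical, and this is an illustration, not the test's actual code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	const bin = "out/minikube-darwin-arm64"
    	const profile = "functional-488000"

    	// Start one mount daemon in the background (host path is hypothetical).
    	mount := exec.Command(bin, "mount", "-p", profile, "/tmp/src:/mount1")
    	if err := mount.Start(); err != nil {
    		panic(err)
    	}
    	defer mount.Process.Kill()

    	// Poll until the mount is visible inside the guest, as the test does.
    	for i := 0; i < 10; i++ {
    		out, err := exec.Command(bin, "-p", profile, "ssh", "findmnt -T", "/mount1").CombinedOutput()
    		if err == nil {
    			fmt.Printf("mount visible:\n%s", out)
    			return
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("mount did not appear; skipping, as the test above does")
    }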

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.53s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523:
----------------------- debugLogs start: cilium-991000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-991000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-991000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-991000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-991000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-991000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-991000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-991000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-991000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-991000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-991000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: /etc/hosts:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: /etc/resolv.conf:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-991000

>>> host: crictl pods:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: crictl containers:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> k8s: describe netcat deployment:
error: context "cilium-991000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-991000" does not exist

>>> k8s: netcat logs:
error: context "cilium-991000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-991000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-991000" does not exist

>>> k8s: coredns logs:
error: context "cilium-991000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-991000" does not exist

>>> k8s: api server logs:
error: context "cilium-991000" does not exist

>>> host: /etc/cni:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: ip a s:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: ip r s:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: iptables-save:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: iptables table nat:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-991000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-991000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-991000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-991000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-991000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-991000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-991000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-991000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-991000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-991000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-991000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: kubelet daemon config:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> k8s: kubelet logs:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-991000

>>> host: docker daemon status:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: docker daemon config:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: docker system info:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: cri-docker daemon status:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: cri-docker daemon config:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: cri-dockerd version:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: containerd daemon status:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: containerd daemon config:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: containerd config dump:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: crio daemon status:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: crio daemon config:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: /etc/crio:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

>>> host: crio config:
* Profile "cilium-991000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991000"

----------------------- debugLogs end: cilium-991000 [took: 2.286232833s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-991000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-991000
--- SKIP: TestNetworkPlugins/group/cilium (2.53s)
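Note: every probe in the debugLogs above fails the same way because the cilium-991000 cluster was never started, so no matching kubeconfig context exists; the empty `kubectl config` dump in the middle of the log shows this directly. Below is a minimal Go sketch, assuming the k8s.io/client-go module is available, of the context lookup that produces the `context "cilium-991000" does not exist` error; it illustrates the mechanism and is not minikube's or kubectl's actual code.

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	const name = "cilium-991000" // profile/context name taken from the log above

    	// Load kubeconfig the way kubectl does (KUBECONFIG, then ~/.kube/config).
    	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
    	if err != nil {
    		panic(err)
    	}

    	// With the empty config shown above, this lookup fails for any name.
    	if _, ok := cfg.Contexts[name]; !ok {
    		fmt.Printf("context %q does not exist\n", name)
    	}
    }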

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-145000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-145000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)