Test Report: QEMU_macOS 17263

9c7b220a3b46302c250803ffb8def25eadaf0a12:2023-09-18:31068

Failed tests (83/260)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 17.19
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.94
24 TestAddons/parallel/Registry 720.82
25 TestAddons/parallel/Ingress 0.78
32 TestAddons/parallel/CloudSpanner 805.1
37 TestCertOptions 9.99
38 TestCertExpiration 195.09
39 TestDockerFlags 10.05
40 TestForceSystemdFlag 9.98
41 TestForceSystemdEnv 10.2
86 TestFunctional/parallel/ServiceCmdConnect 42.71
153 TestImageBuild/serial/BuildWithBuildArg 1.08
162 TestIngressAddonLegacy/serial/ValidateIngressAddons 56.85
194 TestMinikubeProfile 76.57
202 TestMountStart/serial/VerifyMountPostDelete 101.03
211 TestMultiNode/serial/StopNode 378.17
212 TestMultiNode/serial/StartAfterStop 230.15
213 TestMultiNode/serial/RestartKeepsNodes 41.52
214 TestMultiNode/serial/DeleteNode 0.1
215 TestMultiNode/serial/StopMultiNode 0.17
216 TestMultiNode/serial/RestartMultiNode 5.24
217 TestMultiNode/serial/ValidateNameConflict 10.72
221 TestPreload 10.06
223 TestScheduledStopUnix 9.77
224 TestSkaffold 12.07
227 TestRunningBinaryUpgrade 169.13
229 TestKubernetesUpgrade 15.34
242 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.69
243 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.85
244 TestStoppedBinaryUpgrade/Setup 176.32
246 TestPause/serial/Start 9.75
256 TestNoKubernetes/serial/StartWithK8s 9.86
257 TestNoKubernetes/serial/StartWithStopK8s 5.31
258 TestNoKubernetes/serial/Start 5.32
262 TestNoKubernetes/serial/StartNoArgs 5.3
264 TestNetworkPlugins/group/auto/Start 9.96
265 TestNetworkPlugins/group/kindnet/Start 9.95
266 TestNetworkPlugins/group/calico/Start 10.02
267 TestNetworkPlugins/group/custom-flannel/Start 9.74
268 TestNetworkPlugins/group/false/Start 9.82
269 TestNetworkPlugins/group/enable-default-cni/Start 9.93
270 TestNetworkPlugins/group/flannel/Start 9.76
271 TestNetworkPlugins/group/bridge/Start 9.81
272 TestNetworkPlugins/group/kubenet/Start 9.74
274 TestStartStop/group/old-k8s-version/serial/FirstStart 9.83
275 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
276 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
279 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
280 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
281 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.05
282 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.06
283 TestStartStop/group/old-k8s-version/serial/Pause 0.1
285 TestStartStop/group/no-preload/serial/FirstStart 9.71
286 TestStartStop/group/no-preload/serial/DeployApp 0.09
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
290 TestStartStop/group/no-preload/serial/SecondStart 7.03
291 TestStoppedBinaryUpgrade/Upgrade 2.32
292 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
294 TestStartStop/group/embed-certs/serial/FirstStart 10.01
295 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
296 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.05
297 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
298 TestStartStop/group/no-preload/serial/Pause 0.09
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.82
301 TestStartStop/group/embed-certs/serial/DeployApp 0.09
302 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
305 TestStartStop/group/embed-certs/serial/SecondStart 5.25
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
312 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
313 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.06
314 TestStartStop/group/embed-certs/serial/Pause 0.09
316 TestStartStop/group/newest-cni/serial/FirstStart 9.89
317 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
318 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.05
319 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
320 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.09
325 TestStartStop/group/newest-cni/serial/SecondStart 5.24
328 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
329 TestStartStop/group/newest-cni/serial/Pause 0.1
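For anyone triaging this report offline, the whitespace-separated failure table above can be pulled into structured rows with a short script. This is only a sketch, assuming the `order  test-name  duration-in-seconds` column layout shown:

```python
import re

def parse_failures(report_lines):
    """Parse 'order name duration' rows from the failed-test table.

    Rows that don't match the numeric-order / name / numeric-duration
    shape (e.g. the header line) are skipped.
    """
    rows = []
    for line in report_lines:
        m = re.match(r"^(\d+)\s+(\S+)\s+([\d.]+)$", line.strip())
        if m:
            rows.append((int(m.group(1)), m.group(2), float(m.group(3))))
    return rows

sample = [
    "3 TestDownloadOnly/v1.16.0/json-events 17.19",
    "24 TestAddons/parallel/Registry 720.82",
]
print(parse_failures(sample))
```

Sorting the result by duration quickly separates fast setup failures (the many ~10 s qemu2 create failures) from long hangs such as `TestAddons/parallel/Registry` at 720.82 s.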
TestDownloadOnly/v1.16.0/json-events (17.19s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-242000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-242000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (17.18728125s)

-- stdout --
	{"specversion":"1.0","id":"0a4292e4-b52f-443b-842f-5ccf28323b5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-242000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7bcb4809-7d41-4ac9-a264-6a3b9f75fe90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17263"}}
	{"specversion":"1.0","id":"187c1539-12ce-4bfc-95b4-f86037e2aa31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig"}}
	{"specversion":"1.0","id":"af0fe2df-0392-4bd2-97be-82164d28ea3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b5539383-3de7-4f01-80b7-32b20f3a9dc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1b4e229c-e30d-4e8c-a58f-7c00d22c8a44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube"}}
	{"specversion":"1.0","id":"6ba644e3-9410-448b-9bc2-e3672dd857ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"a8a053f1-cbaa-463a-8d34-75da73bfee5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"50ece0ce-e7cd-4644-8ff1-68f87375e67b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"92b70260-8e39-4249-9978-b059c5d05e7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ddb71033-f4fa-415f-8ee6-7fa282313a79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-242000 in cluster download-only-242000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"36526bff-16af-41e5-9ade-29c255eb4d66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"81e3b15c-77c9-45ff-8968-31555d1ed0c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1083497e0 0x1083497e0 0x1083497e0 0x1083497e0 0x1083497e0 0x1083497e0 0x1083497e0] Decompressors:map[bz2:0x1400013cdd0 gz:0x1400013cdd8 tar:0x1400013cd10 tar.bz2:0x1400013cd20 tar.gz:0x1400013cd30 tar.xz:0x1400013cd50 tar.zst:0x1400013cd60 tbz2:0x1400013cd20 tgz:0x140001
3cd30 txz:0x1400013cd50 tzst:0x1400013cd60 xz:0x1400013cde0 zip:0x1400013ce20 zst:0x1400013cde8] Getters:map[file:0x14000cbcdb0 http:0x1400017e910 https:0x1400017e960] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"a9e28e35-ed81-47e0-b44c-ec563048bf85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0918 11:51:43.673352    1670 out.go:296] Setting OutFile to fd 1 ...
	I0918 11:51:43.673496    1670 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 11:51:43.673499    1670 out.go:309] Setting ErrFile to fd 2...
	I0918 11:51:43.673502    1670 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 11:51:43.673628    1670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	W0918 11:51:43.673712    1670 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17263-1251/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17263-1251/.minikube/config/config.json: no such file or directory
	I0918 11:51:43.674814    1670 out.go:303] Setting JSON to true
	I0918 11:51:43.691096    1670 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1277,"bootTime":1695061826,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 11:51:43.691153    1670 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 11:51:43.696788    1670 out.go:97] [download-only-242000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 11:51:43.700713    1670 out.go:169] MINIKUBE_LOCATION=17263
	I0918 11:51:43.696908    1670 notify.go:220] Checking for updates...
	W0918 11:51:43.696937    1670 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 11:51:43.707561    1670 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 11:51:43.711781    1670 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 11:51:43.714758    1670 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 11:51:43.716116    1670 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	W0918 11:51:43.721757    1670 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 11:51:43.721987    1670 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 11:51:43.727772    1670 out.go:97] Using the qemu2 driver based on user configuration
	I0918 11:51:43.727778    1670 start.go:298] selected driver: qemu2
	I0918 11:51:43.727792    1670 start.go:902] validating driver "qemu2" against <nil>
	I0918 11:51:43.727853    1670 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 11:51:43.731695    1670 out.go:169] Automatically selected the socket_vmnet network
	I0918 11:51:43.737213    1670 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0918 11:51:43.737298    1670 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 11:51:43.737361    1670 cni.go:84] Creating CNI manager for ""
	I0918 11:51:43.737378    1670 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 11:51:43.737384    1670 start_flags.go:321] config:
	{Name:download-only-242000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-242000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 11:51:43.742567    1670 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 11:51:43.745703    1670 out.go:97] Downloading VM boot image ...
	I0918 11:51:43.745736    1670 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso
	I0918 11:51:50.801506    1670 out.go:97] Starting control plane node download-only-242000 in cluster download-only-242000
	I0918 11:51:50.801529    1670 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0918 11:51:50.861845    1670 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0918 11:51:50.861853    1670 cache.go:57] Caching tarball of preloaded images
	I0918 11:51:50.862024    1670 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0918 11:51:50.865690    1670 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0918 11:51:50.865697    1670 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 11:51:50.946758    1670 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0918 11:51:59.686637    1670 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 11:51:59.686787    1670 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 11:52:00.326560    1670 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0918 11:52:00.326753    1670 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/download-only-242000/config.json ...
	I0918 11:52:00.326771    1670 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/download-only-242000/config.json: {Name:mk2d38f7178624dd8e5685d2e554cb81270be80a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:00.327027    1670 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0918 11:52:00.327185    1670 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0918 11:52:00.800545    1670 out.go:169] 
	W0918 11:52:00.804618    1670 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1083497e0 0x1083497e0 0x1083497e0 0x1083497e0 0x1083497e0 0x1083497e0 0x1083497e0] Decompressors:map[bz2:0x1400013cdd0 gz:0x1400013cdd8 tar:0x1400013cd10 tar.bz2:0x1400013cd20 tar.gz:0x1400013cd30 tar.xz:0x1400013cd50 tar.zst:0x1400013cd60 tbz2:0x1400013cd20 tgz:0x1400013cd30 txz:0x1400013cd50 tzst:0x1400013cd60 xz:0x1400013cde0 zip:0x1400013ce20 zst:0x1400013cde8] Getters:map[file:0x14000cbcdb0 http:0x1400017e910 https:0x1400017e960] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0918 11:52:00.804643    1670 out_reason.go:110] 
	W0918 11:52:00.809566    1670 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 11:52:00.813560    1670 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-242000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (17.19s)
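The 404 above comes from the checksum fetch for the kubectl binary. The download URL is derived mechanically from the Kubernetes version, OS, and architecture, and v1.16.0 appears to predate any published darwin/arm64 kubectl build, so nothing exists at the derived path. A minimal sketch of the URL derivation (my reconstruction, not minikube's actual code):

```python
def kubectl_url(version, goos, goarch):
    # Release binaries are laid out under dl.k8s.io by version/os/arch;
    # the matching checksum file is the same URL plus ".sha1".
    return f"https://dl.k8s.io/release/{version}/bin/{goos}/{goarch}/kubectl"

url = kubectl_url("v1.16.0", "darwin", "arm64")
print(url)
# → https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl
```

This matches the URL in the error log above, so the failure is an expected gap in upstream release artifacts for this version/arch combination rather than a transient network error.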

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestOffline (9.94s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-612000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-612000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.768181584s)

-- stdout --
	* [offline-docker-612000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-612000 in cluster offline-docker-612000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-612000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
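The repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` in the output above suggests the socket_vmnet daemon was not running on this agent, which would also explain the large cluster of ~10 s qemu2 start failures in the summary table. A preflight check along these lines (a hypothetical helper, not part of the test suite) could distinguish a missing daemon from a driver regression before the run starts:

```python
import os
import socket
import stat

def socket_vmnet_ready(path="/var/run/socket_vmnet"):
    """Return True only if `path` exists, is a unix socket, and
    accepts a connection (i.e. the daemon is actually listening)."""
    try:
        if not stat.S_ISSOCK(os.stat(path).st_mode):
            return False
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            s.connect(path)
        return True
    except OSError:
        # Missing path, permission error, or connection refused.
        return False
```

A stale socket file with no listener behind it fails the `connect` step with `ECONNREFUSED`, which is exactly the symptom in this log.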
** stderr ** 
	I0918 12:38:08.292178    3838 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:38:08.292340    3838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:38:08.292343    3838 out.go:309] Setting ErrFile to fd 2...
	I0918 12:38:08.292346    3838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:38:08.292481    3838 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:38:08.293470    3838 out.go:303] Setting JSON to false
	I0918 12:38:08.309952    3838 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4062,"bootTime":1695061826,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:38:08.310038    3838 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:38:08.314715    3838 out.go:177] * [offline-docker-612000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:38:08.326750    3838 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:38:08.330726    3838 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:38:08.326858    3838 notify.go:220] Checking for updates...
	I0918 12:38:08.336590    3838 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:38:08.339896    3838 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:38:08.342698    3838 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:38:08.344192    3838 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:38:08.347870    3838 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:38:08.351769    3838 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:38:08.357748    3838 start.go:298] selected driver: qemu2
	I0918 12:38:08.357754    3838 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:38:08.357760    3838 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:38:08.359815    3838 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:38:08.362748    3838 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:38:08.365767    3838 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:38:08.365784    3838 cni.go:84] Creating CNI manager for ""
	I0918 12:38:08.365790    3838 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:38:08.365793    3838 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:38:08.365797    3838 start_flags.go:321] config:
	{Name:offline-docker-612000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-612000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:38:08.369974    3838 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:38:08.372629    3838 out.go:177] * Starting control plane node offline-docker-612000 in cluster offline-docker-612000
	I0918 12:38:08.380687    3838 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:38:08.380712    3838 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:38:08.380723    3838 cache.go:57] Caching tarball of preloaded images
	I0918 12:38:08.380791    3838 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:38:08.380796    3838 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:38:08.380995    3838 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/offline-docker-612000/config.json ...
	I0918 12:38:08.381008    3838 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/offline-docker-612000/config.json: {Name:mkbaf75ec2851f5165d7018699b7880738d6fc35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:08.381225    3838 start.go:365] acquiring machines lock for offline-docker-612000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:38:08.381262    3838 start.go:369] acquired machines lock for "offline-docker-612000" in 28.75µs
	I0918 12:38:08.381275    3838 start.go:93] Provisioning new machine with config: &{Name:offline-docker-612000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-612000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:38:08.381301    3838 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:38:08.385610    3838 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 12:38:08.399705    3838 start.go:159] libmachine.API.Create for "offline-docker-612000" (driver="qemu2")
	I0918 12:38:08.399729    3838 client.go:168] LocalClient.Create starting
	I0918 12:38:08.399803    3838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:38:08.399828    3838 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:08.399840    3838 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:08.399885    3838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:38:08.399903    3838 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:08.399912    3838 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:08.400245    3838 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:38:08.515564    3838 main.go:141] libmachine: Creating SSH key...
	I0918 12:38:08.611751    3838 main.go:141] libmachine: Creating Disk image...
	I0918 12:38:08.611764    3838 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:38:08.611919    3838 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/disk.qcow2
	I0918 12:38:08.620628    3838 main.go:141] libmachine: STDOUT: 
	I0918 12:38:08.620643    3838 main.go:141] libmachine: STDERR: 
	I0918 12:38:08.620694    3838 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/disk.qcow2 +20000M
	I0918 12:38:08.628401    3838 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:38:08.628424    3838 main.go:141] libmachine: STDERR: 
	I0918 12:38:08.628453    3838 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/disk.qcow2
	I0918 12:38:08.628462    3838 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:38:08.628508    3838 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:28:91:25:d0:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/disk.qcow2
	I0918 12:38:08.630270    3838 main.go:141] libmachine: STDOUT: 
	I0918 12:38:08.630297    3838 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:38:08.630321    3838 client.go:171] LocalClient.Create took 230.588541ms
	I0918 12:38:10.632348    3838 start.go:128] duration metric: createHost completed in 2.251081958s
	I0918 12:38:10.632366    3838 start.go:83] releasing machines lock for "offline-docker-612000", held for 2.251141916s
	W0918 12:38:10.632379    3838 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:38:10.637847    3838 out.go:177] * Deleting "offline-docker-612000" in qemu2 ...
	W0918 12:38:10.655236    3838 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:38:10.655243    3838 start.go:703] Will try again in 5 seconds ...
	I0918 12:38:15.657308    3838 start.go:365] acquiring machines lock for offline-docker-612000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:38:15.657592    3838 start.go:369] acquired machines lock for "offline-docker-612000" in 216.459µs
	I0918 12:38:15.657683    3838 start.go:93] Provisioning new machine with config: &{Name:offline-docker-612000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-docker-612000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:38:15.657851    3838 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:38:15.669078    3838 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 12:38:15.710262    3838 start.go:159] libmachine.API.Create for "offline-docker-612000" (driver="qemu2")
	I0918 12:38:15.710305    3838 client.go:168] LocalClient.Create starting
	I0918 12:38:15.710440    3838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:38:15.710500    3838 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:15.710534    3838 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:15.710609    3838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:38:15.710657    3838 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:15.710674    3838 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:15.711196    3838 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:38:15.898596    3838 main.go:141] libmachine: Creating SSH key...
	I0918 12:38:15.977592    3838 main.go:141] libmachine: Creating Disk image...
	I0918 12:38:15.977601    3838 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:38:15.977744    3838 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/disk.qcow2
	I0918 12:38:15.986454    3838 main.go:141] libmachine: STDOUT: 
	I0918 12:38:15.986469    3838 main.go:141] libmachine: STDERR: 
	I0918 12:38:15.986523    3838 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/disk.qcow2 +20000M
	I0918 12:38:15.993922    3838 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:38:15.993936    3838 main.go:141] libmachine: STDERR: 
	I0918 12:38:15.993951    3838 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/disk.qcow2
	I0918 12:38:15.993959    3838 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:38:15.993992    3838 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:d5:01:25:08:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/offline-docker-612000/disk.qcow2
	I0918 12:38:15.995560    3838 main.go:141] libmachine: STDOUT: 
	I0918 12:38:15.995575    3838 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:38:15.995588    3838 client.go:171] LocalClient.Create took 285.283042ms
	I0918 12:38:17.997761    3838 start.go:128] duration metric: createHost completed in 2.339917625s
	I0918 12:38:17.997887    3838 start.go:83] releasing machines lock for "offline-docker-612000", held for 2.340272291s
	W0918 12:38:17.998382    3838 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-612000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-612000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:38:18.006048    3838 out.go:177] 
	W0918 12:38:18.010092    3838 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:38:18.010118    3838 out.go:239] * 
	* 
	W0918 12:38:18.012749    3838 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:38:18.021018    3838 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-612000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:523: *** TestOffline FAILED at 2023-09-18 12:38:18.035404 -0700 PDT m=+2794.512153792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-612000 -n offline-docker-612000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-612000 -n offline-docker-612000: exit status 7 (65.498916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-612000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-612000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-612000
--- FAIL: TestOffline (9.94s)
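Every VM start in the TestOffline log above fails the same way: `socket_vmnet_client` gets `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon was not listening when it tried to hand QEMU the `-netdev socket,id=net0,fd=3` file descriptor. A minimal pre-flight sketch for this class of failure (the socket path is the default shown in the log's `SocketVMnetPath`; `check_socket` is a hypothetical helper, not part of minikube):

```shell
#!/bin/sh
# Report whether a unix-domain socket exists at the given path.
# The socket_vmnet daemon (commonly run as a launchd service on macOS)
# must be listening here before `minikube start --driver=qemu2
# --network=socket_vmnet` can wire up the VM's network.
check_socket() {
    if [ -S "$1" ]; then
        echo "OK: $1 is a unix socket"
    else
        echo "MISSING: $1"
    fi
}

check_socket "${1:-/var/run/socket_vmnet}"
```

If the socket is missing, restarting the socket_vmnet service on the CI host (rather than retrying the minikube tests) is the likely fix, since every subsequent qemu2 test in this report fails with the identical error.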

TestAddons/parallel/Registry (720.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry


=== CONT  TestAddons/parallel/Registry
addons_test.go:304: failed waiting for registry replicacontroller to stabilize: timed out waiting for the condition
addons_test.go:306: registry stabilized in 6m0.001488375s
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
addons_test.go:308: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-221000 -n addons-221000
addons_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-221000 -n addons-221000: exit status 7 (35.473917ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0918 12:11:00.331738    1990 status.go:249] status error: host: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused

** /stderr **
addons_test.go:308: status error: exit status 7 (may be ok)
addons_test.go:308: "addons-221000" apiserver is not running, skipping kubectl commands (state="Nonexistent")
addons_test.go:309: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-221000 -n addons-221000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-221000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:51 PDT |                     |
	|         | -p download-only-242000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT |                     |
	|         | -p download-only-242000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:52 PDT |
	| delete  | -p download-only-242000        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:52 PDT |
	| delete  | -p download-only-242000        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:52 PDT |
	| start   | --download-only -p             | binary-mirror-077000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT |                     |
	|         | binary-mirror-077000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49414         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-077000        | binary-mirror-077000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:52 PDT |
	| start   | -p addons-221000               | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:59 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 12:11 PDT |                     |
	|         | addons-221000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 11:52:16
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 11:52:16.711602    1740 out.go:296] Setting OutFile to fd 1 ...
	I0918 11:52:16.711748    1740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 11:52:16.711751    1740 out.go:309] Setting ErrFile to fd 2...
	I0918 11:52:16.711753    1740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 11:52:16.711880    1740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 11:52:16.712918    1740 out.go:303] Setting JSON to false
	I0918 11:52:16.728001    1740 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1310,"bootTime":1695061826,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 11:52:16.728087    1740 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 11:52:16.732378    1740 out.go:177] * [addons-221000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 11:52:16.739454    1740 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 11:52:16.743421    1740 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 11:52:16.739507    1740 notify.go:220] Checking for updates...
	I0918 11:52:16.749403    1740 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 11:52:16.752377    1740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 11:52:16.755381    1740 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 11:52:16.758417    1740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 11:52:16.761446    1740 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 11:52:16.765371    1740 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 11:52:16.777355    1740 start.go:298] selected driver: qemu2
	I0918 11:52:16.777364    1740 start.go:902] validating driver "qemu2" against <nil>
	I0918 11:52:16.777372    1740 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 11:52:16.779390    1740 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 11:52:16.782385    1740 out.go:177] * Automatically selected the socket_vmnet network
	I0918 11:52:16.785462    1740 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 11:52:16.785488    1740 cni.go:84] Creating CNI manager for ""
	I0918 11:52:16.785496    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 11:52:16.785507    1740 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 11:52:16.785513    1740 start_flags.go:321] config:
	{Name:addons-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 11:52:16.789634    1740 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 11:52:16.798394    1740 out.go:177] * Starting control plane node addons-221000 in cluster addons-221000
	I0918 11:52:16.802195    1740 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 11:52:16.802217    1740 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 11:52:16.802234    1740 cache.go:57] Caching tarball of preloaded images
	I0918 11:52:16.802301    1740 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 11:52:16.802315    1740 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 11:52:16.802542    1740 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/config.json ...
	I0918 11:52:16.802555    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/config.json: {Name:mk6624c585fbc7911138df2cd59d1f2e10251cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:16.802799    1740 start.go:365] acquiring machines lock for addons-221000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 11:52:16.802873    1740 start.go:369] acquired machines lock for "addons-221000" in 68.417µs
	I0918 11:52:16.802886    1740 start.go:93] Provisioning new machine with config: &{Name:addons-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 11:52:16.802925    1740 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 11:52:16.810242    1740 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0918 11:52:17.161676    1740 start.go:159] libmachine.API.Create for "addons-221000" (driver="qemu2")
	I0918 11:52:17.161722    1740 client.go:168] LocalClient.Create starting
	I0918 11:52:17.161932    1740 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 11:52:17.253776    1740 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 11:52:17.312301    1740 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 11:52:17.776256    1740 main.go:141] libmachine: Creating SSH key...
	I0918 11:52:17.897328    1740 main.go:141] libmachine: Creating Disk image...
	I0918 11:52:17.897334    1740 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 11:52:17.897524    1740 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2
	I0918 11:52:17.933044    1740 main.go:141] libmachine: STDOUT: 
	I0918 11:52:17.933072    1740 main.go:141] libmachine: STDERR: 
	I0918 11:52:17.933136    1740 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2 +20000M
	I0918 11:52:17.940597    1740 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 11:52:17.940609    1740 main.go:141] libmachine: STDERR: 
	I0918 11:52:17.940623    1740 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2
	I0918 11:52:17.940628    1740 main.go:141] libmachine: Starting QEMU VM...
	I0918 11:52:17.940657    1740 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ae:e8:0a:fd:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2
	I0918 11:52:18.008779    1740 main.go:141] libmachine: STDOUT: 
	I0918 11:52:18.008804    1740 main.go:141] libmachine: STDERR: 
	I0918 11:52:18.008808    1740 main.go:141] libmachine: Attempt 0
	I0918 11:52:18.008820    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:20.011046    1740 main.go:141] libmachine: Attempt 1
	I0918 11:52:20.011124    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:22.013479    1740 main.go:141] libmachine: Attempt 2
	I0918 11:52:22.013559    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:24.015632    1740 main.go:141] libmachine: Attempt 3
	I0918 11:52:24.015645    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:26.017675    1740 main.go:141] libmachine: Attempt 4
	I0918 11:52:26.017681    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:28.018761    1740 main.go:141] libmachine: Attempt 5
	I0918 11:52:28.018782    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:30.020886    1740 main.go:141] libmachine: Attempt 6
	I0918 11:52:30.020920    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:30.021070    1740 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0918 11:52:30.021123    1740 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6509edec}
	I0918 11:52:30.021130    1740 main.go:141] libmachine: Found match: ce:ae:e8:a:fd:16
	I0918 11:52:30.021140    1740 main.go:141] libmachine: IP: 192.168.105.2
	I0918 11:52:30.021147    1740 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0918 11:52:31.026067    1740 machine.go:88] provisioning docker machine ...
	I0918 11:52:31.026085    1740 buildroot.go:166] provisioning hostname "addons-221000"
	I0918 11:52:31.026964    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.027231    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.027237    1740 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-221000 && echo "addons-221000" | sudo tee /etc/hostname
	I0918 11:52:31.084404    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-221000
	
	I0918 11:52:31.084473    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.084732    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.084740    1740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-221000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-221000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-221000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 11:52:31.144009    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 11:52:31.144022    1740 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17263-1251/.minikube CaCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17263-1251/.minikube}
	I0918 11:52:31.144033    1740 buildroot.go:174] setting up certificates
	I0918 11:52:31.144038    1740 provision.go:83] configureAuth start
	I0918 11:52:31.144042    1740 provision.go:138] copyHostCerts
	I0918 11:52:31.144138    1740 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.pem (1082 bytes)
	I0918 11:52:31.144342    1740 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/cert.pem (1123 bytes)
	I0918 11:52:31.144435    1740 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/key.pem (1679 bytes)
	I0918 11:52:31.144503    1740 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem org=jenkins.addons-221000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-221000]
	I0918 11:52:31.225327    1740 provision.go:172] copyRemoteCerts
	I0918 11:52:31.225385    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 11:52:31.225394    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:52:31.256352    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 11:52:31.263330    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0918 11:52:31.270197    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 11:52:31.276715    1740 provision.go:86] duration metric: configureAuth took 132.670667ms
	I0918 11:52:31.276723    1740 buildroot.go:189] setting minikube options for container-runtime
	I0918 11:52:31.276820    1740 config.go:182] Loaded profile config "addons-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 11:52:31.276857    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.277075    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.277080    1740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 11:52:31.337901    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0918 11:52:31.337910    1740 buildroot.go:70] root file system type: tmpfs
	I0918 11:52:31.337970    1740 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 11:52:31.338012    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.338275    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.338315    1740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 11:52:31.400816    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 11:52:31.400863    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.401116    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.401126    1740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0918 11:52:31.746670    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0918 11:52:31.746684    1740 machine.go:91] provisioned docker machine in 720.613ms
	I0918 11:52:31.746690    1740 client.go:171] LocalClient.Create took 14.585099291s
	I0918 11:52:31.746703    1740 start.go:167] duration metric: libmachine.API.Create for "addons-221000" took 14.585173417s
	I0918 11:52:31.746707    1740 start.go:300] post-start starting for "addons-221000" (driver="qemu2")
	I0918 11:52:31.746711    1740 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 11:52:31.746780    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 11:52:31.746790    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:52:31.775601    1740 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 11:52:31.776975    1740 info.go:137] Remote host: Buildroot 2021.02.12
	I0918 11:52:31.776983    1740 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17263-1251/.minikube/addons for local assets ...
	I0918 11:52:31.777055    1740 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17263-1251/.minikube/files for local assets ...
	I0918 11:52:31.777083    1740 start.go:303] post-start completed in 30.374292ms
	I0918 11:52:31.777437    1740 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/config.json ...
	I0918 11:52:31.777603    1740 start.go:128] duration metric: createHost completed in 14.974815417s
	I0918 11:52:31.777667    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.777884    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.777888    1740 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0918 11:52:31.833629    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695063151.431021085
	
	I0918 11:52:31.833635    1740 fix.go:206] guest clock: 1695063151.431021085
	I0918 11:52:31.833638    1740 fix.go:219] Guest: 2023-09-18 11:52:31.431021085 -0700 PDT Remote: 2023-09-18 11:52:31.777608 -0700 PDT m=+15.083726834 (delta=-346.586915ms)
	I0918 11:52:31.833654    1740 fix.go:190] guest clock delta is within tolerance: -346.586915ms
	I0918 11:52:31.833656    1740 start.go:83] releasing machines lock for "addons-221000", held for 15.0309195s
	I0918 11:52:31.833905    1740 ssh_runner.go:195] Run: cat /version.json
	I0918 11:52:31.833915    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:52:31.833930    1740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 11:52:31.833973    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:52:31.902596    1740 ssh_runner.go:195] Run: systemctl --version
	I0918 11:52:31.904777    1740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 11:52:31.906638    1740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 11:52:31.906668    1740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 11:52:31.911697    1740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 11:52:31.911704    1740 start.go:469] detecting cgroup driver to use...
	I0918 11:52:31.911799    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 11:52:31.917320    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0918 11:52:31.920500    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 11:52:31.923811    1740 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 11:52:31.923843    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 11:52:31.926950    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 11:52:31.929666    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 11:52:31.932680    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 11:52:31.936002    1740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 11:52:31.939362    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 11:52:31.942186    1740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 11:52:31.944734    1740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 11:52:31.947664    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:32.027464    1740 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0918 11:52:32.036559    1740 start.go:469] detecting cgroup driver to use...
	I0918 11:52:32.036614    1740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 11:52:32.042440    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 11:52:32.047583    1740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 11:52:32.053840    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 11:52:32.058225    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 11:52:32.062305    1740 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0918 11:52:32.098720    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 11:52:32.103440    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 11:52:32.108844    1740 ssh_runner.go:195] Run: which cri-dockerd
	I0918 11:52:32.110173    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 11:52:32.112731    1740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0918 11:52:32.117532    1740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 11:52:32.194769    1740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 11:52:32.269401    1740 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 11:52:32.269417    1740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0918 11:52:32.274373    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:32.355030    1740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 11:52:33.517984    1740 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1629475s)
	I0918 11:52:33.518044    1740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 11:52:33.595160    1740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0918 11:52:33.670332    1740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 11:52:33.746578    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:33.822625    1740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0918 11:52:33.829925    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:33.909957    1740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0918 11:52:33.933398    1740 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0918 11:52:33.933487    1740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0918 11:52:33.935757    1740 start.go:537] Will wait 60s for crictl version
	I0918 11:52:33.935801    1740 ssh_runner.go:195] Run: which crictl
	I0918 11:52:33.937147    1740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 11:52:33.952602    1740 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0918 11:52:33.952673    1740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 11:52:33.962082    1740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 11:52:33.975334    1740 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0918 11:52:33.975416    1740 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0918 11:52:33.976970    1740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 11:52:33.980853    1740 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 11:52:33.980897    1740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 11:52:33.986139    1740 docker.go:636] Got preloaded images: 
	I0918 11:52:33.986147    1740 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0918 11:52:33.986189    1740 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 11:52:33.988954    1740 ssh_runner.go:195] Run: which lz4
	I0918 11:52:33.990479    1740 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0918 11:52:33.991766    1740 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 11:52:33.991780    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I0918 11:52:35.310513    1740 docker.go:600] Took 1.320057 seconds to copy over tarball
	I0918 11:52:35.310582    1740 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 11:52:36.348518    1740 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.03793175s)
	I0918 11:52:36.348535    1740 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 11:52:36.364745    1740 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 11:52:36.368295    1740 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0918 11:52:36.373429    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:36.450305    1740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 11:52:38.940530    1740 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.490231167s)
	I0918 11:52:38.940627    1740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 11:52:38.946699    1740 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 11:52:38.946709    1740 cache_images.go:84] Images are preloaded, skipping loading
	I0918 11:52:38.946766    1740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 11:52:38.954428    1740 cni.go:84] Creating CNI manager for ""
	I0918 11:52:38.954439    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 11:52:38.954458    1740 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0918 11:52:38.954467    1740 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-221000 NodeName:addons-221000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 11:52:38.954540    1740 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-221000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
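The `evictionHard` values in the rendered config above appear as `"0%!"(MISSING)`. That string is Go's fmt package reporting an unknown verb with a missing operand: when text containing a literal `%` ends up in the format-string position of a Printf-style call, the `%"` pair is parsed as a verb with no argument and rendered as `%!"(MISSING)`. A minimal Go sketch of the mechanism (attributing it to a Printf-style call in minikube is an inference from the output, not confirmed from its source):

```go
package main

import "fmt"

// mangle shows how a literal '%' in text gets mauled when that text is used
// as a Printf format string with no arguments: the '%"' pair is parsed as an
// unknown verb with a missing operand and rendered as %!"(MISSING).
func mangle(s string) string {
	sprintf := fmt.Sprintf // indirection only to keep the sketch vet-clean
	return sprintf(s)      // s is (mis)used as the format string
}

func main() {
	fmt.Println(mangle(`nodefs.available: "0%"`))
	// prints: nodefs.available: "0%!"(MISSING)
}
```

The intended values in the config are simply `"0%"`; only the log rendering is affected.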
	I0918 11:52:38.954592    1740 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-221000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0918 11:52:38.954661    1740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0918 11:52:38.957531    1740 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 11:52:38.957562    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 11:52:38.960385    1740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0918 11:52:38.965592    1740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 11:52:38.970301    1740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0918 11:52:38.975165    1740 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0918 11:52:38.976400    1740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 11:52:38.980255    1740 certs.go:56] Setting up /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000 for IP: 192.168.105.2
	I0918 11:52:38.980276    1740 certs.go:190] acquiring lock for shared ca certs: {Name:mkfac81ee65979b8c4f5ece6243c3a0190531689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:38.980470    1740 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key
	I0918 11:52:39.170828    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt ...
	I0918 11:52:39.170844    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt: {Name:mk0f303ee67627c25d1d04e1887861f15cdad763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.171150    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key ...
	I0918 11:52:39.171155    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key: {Name:mkc5e20e8161cfdcfc3d5dcd8300765ea2c12112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.171271    1740 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key
	I0918 11:52:39.287022    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.crt ...
	I0918 11:52:39.287027    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.crt: {Name:mk54c49c3c44ff09930e6c0f57238b89cff4c5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.287171    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key ...
	I0918 11:52:39.287173    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key: {Name:mk05faae5769358f82565f32c1f37a244f2478c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.287315    1740 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.key
	I0918 11:52:39.287337    1740 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt with IP's: []
	I0918 11:52:39.376234    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt ...
	I0918 11:52:39.376241    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: {Name:mkc8e654c6f2522197f557cb47d266f15eebaadd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.376467    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.key ...
	I0918 11:52:39.376471    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.key: {Name:mkf345dd56f86115b31ecd965617f4c21d6a0cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.376571    1740 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key.96055969
	I0918 11:52:39.376580    1740 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0918 11:52:39.429944    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt.96055969 ...
	I0918 11:52:39.429952    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt.96055969: {Name:mkd69eb587bd0dc6ccdbaa88b78f4f92f2b47b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.430095    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key.96055969 ...
	I0918 11:52:39.430098    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key.96055969: {Name:mk76ea3a7fbbef2305f74e52afdf06cda921c8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.430199    1740 certs.go:337] copying /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt
	I0918 11:52:39.430382    1740 certs.go:341] copying /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key
	I0918 11:52:39.430499    1740 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.key
	I0918 11:52:39.430509    1740 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.crt with IP's: []
	I0918 11:52:39.698555    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.crt ...
	I0918 11:52:39.698563    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.crt: {Name:mk6d7a924ed10f0012b290ec4e0ea6bf1b7bfc02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.698767    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.key ...
	I0918 11:52:39.698773    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.key: {Name:mk8d78e9179e4c57e4602e98d4fc6a37885b4d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
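The crypto.go lines above create a self-signed `minikubeCA`, then use it to sign per-profile certificates such as the `minikube-user` client cert. A compressed sketch of those two steps with the standard library (key sizes, names, and lifetimes here are illustrative assumptions, not minikube's exact parameters):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// buildCAAndClient generates a self-signed CA certificate and then a client
// certificate signed by that CA -- the same shape of work the log's
// "generating minikubeCA CA" and "generating minikube-user signed cert"
// steps perform.
func buildCAAndClient() (ca, client *x509.Certificate, err error) {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	// Self-signed: the CA template is both subject and issuer.
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	ca, _ = x509.ParseCertificate(caDER)

	cliKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	cliTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	// Signed by the CA: parent is the CA cert, signing key is the CA key.
	cliDER, err := x509.CreateCertificate(rand.Reader, cliTmpl, ca, &cliKey.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	client, _ = x509.ParseCertificate(cliDER)
	return ca, client, nil
}

func main() {
	ca, client, err := buildCAAndClient()
	if err != nil {
		panic(err)
	}
	fmt.Println(client.Issuer.CommonName == ca.Subject.CommonName)
	// prints: true
}
```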
	I0918 11:52:39.699037    1740 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 11:52:39.699062    1740 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem (1082 bytes)
	I0918 11:52:39.699081    1740 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem (1123 bytes)
	I0918 11:52:39.699100    1740 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem (1679 bytes)
	I0918 11:52:39.699418    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0918 11:52:39.707305    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 11:52:39.713956    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 11:52:39.720614    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 11:52:39.727643    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 11:52:39.734609    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 11:52:39.741305    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 11:52:39.748141    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0918 11:52:39.755277    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 11:52:39.762025    1740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 11:52:39.767701    1740 ssh_runner.go:195] Run: openssl version
	I0918 11:52:39.769871    1740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 11:52:39.772909    1740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 11:52:39.774536    1740 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 18 18:52 /usr/share/ca-certificates/minikubeCA.pem
	I0918 11:52:39.774555    1740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 11:52:39.776417    1740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 11:52:39.779698    1740 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0918 11:52:39.781162    1740 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0918 11:52:39.781200    1740 kubeadm.go:404] StartCluster: {Name:addons-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 11:52:39.781263    1740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 11:52:39.787256    1740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 11:52:39.790167    1740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 11:52:39.792879    1740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 11:52:39.795890    1740 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 11:52:39.795906    1740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 11:52:39.820130    1740 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0918 11:52:39.820157    1740 kubeadm.go:322] [preflight] Running pre-flight checks
	I0918 11:52:39.874262    1740 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 11:52:39.874320    1740 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 11:52:39.874401    1740 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 11:52:39.936649    1740 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 11:52:39.946863    1740 out.go:204]   - Generating certificates and keys ...
	I0918 11:52:39.946906    1740 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0918 11:52:39.946940    1740 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0918 11:52:40.057135    1740 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 11:52:40.267412    1740 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0918 11:52:40.415260    1740 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0918 11:52:40.592293    1740 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0918 11:52:40.714190    1740 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0918 11:52:40.714252    1740 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-221000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0918 11:52:40.818329    1740 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0918 11:52:40.818397    1740 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-221000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0918 11:52:41.068370    1740 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 11:52:41.110794    1740 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 11:52:41.218301    1740 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0918 11:52:41.218335    1740 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 11:52:41.282421    1740 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 11:52:41.650315    1740 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 11:52:41.733907    1740 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 11:52:41.925252    1740 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 11:52:41.925561    1740 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 11:52:41.927413    1740 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 11:52:41.931680    1740 out.go:204]   - Booting up control plane ...
	I0918 11:52:41.931754    1740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 11:52:41.931794    1740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 11:52:41.931831    1740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 11:52:41.935171    1740 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 11:52:41.935565    1740 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 11:52:41.935586    1740 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0918 11:52:42.024365    1740 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 11:52:45.527756    1740 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.503453 seconds
	I0918 11:52:45.527852    1740 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 11:52:45.533290    1740 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 11:52:46.043984    1740 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 11:52:46.044088    1740 kubeadm.go:322] [mark-control-plane] Marking the node addons-221000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 11:52:46.548611    1740 kubeadm.go:322] [bootstrap-token] Using token: 0otx18.vbdfa1zgl84pbc1n
	I0918 11:52:46.552403    1740 out.go:204]   - Configuring RBAC rules ...
	I0918 11:52:46.552463    1740 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 11:52:46.553348    1740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 11:52:46.557357    1740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 11:52:46.558552    1740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 11:52:46.559879    1740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 11:52:46.560890    1740 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 11:52:46.567944    1740 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 11:52:46.739677    1740 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0918 11:52:46.956246    1740 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0918 11:52:46.958050    1740 kubeadm.go:322] 
	I0918 11:52:46.958085    1740 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0918 11:52:46.958096    1740 kubeadm.go:322] 
	I0918 11:52:46.958138    1740 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0918 11:52:46.958143    1740 kubeadm.go:322] 
	I0918 11:52:46.958157    1740 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0918 11:52:46.958186    1740 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 11:52:46.958221    1740 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 11:52:46.958226    1740 kubeadm.go:322] 
	I0918 11:52:46.958261    1740 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0918 11:52:46.958267    1740 kubeadm.go:322] 
	I0918 11:52:46.958291    1740 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 11:52:46.958294    1740 kubeadm.go:322] 
	I0918 11:52:46.958317    1740 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0918 11:52:46.958375    1740 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 11:52:46.958411    1740 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 11:52:46.958416    1740 kubeadm.go:322] 
	I0918 11:52:46.958458    1740 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 11:52:46.958503    1740 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0918 11:52:46.958507    1740 kubeadm.go:322] 
	I0918 11:52:46.958562    1740 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0otx18.vbdfa1zgl84pbc1n \
	I0918 11:52:46.958623    1740 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ba7ba5242719fff40b98ade8a053fc0c6dded3185e28e2742ca7444c7c25a7a5 \
	I0918 11:52:46.958634    1740 kubeadm.go:322] 	--control-plane 
	I0918 11:52:46.958636    1740 kubeadm.go:322] 
	I0918 11:52:46.958676    1740 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0918 11:52:46.958681    1740 kubeadm.go:322] 
	I0918 11:52:46.958735    1740 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0otx18.vbdfa1zgl84pbc1n \
	I0918 11:52:46.958805    1740 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ba7ba5242719fff40b98ade8a053fc0c6dded3185e28e2742ca7444c7c25a7a5 
	I0918 11:52:46.958862    1740 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 11:52:46.958868    1740 cni.go:84] Creating CNI manager for ""
	I0918 11:52:46.958880    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 11:52:46.967096    1740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 11:52:46.970213    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 11:52:46.973479    1740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0918 11:52:46.977991    1740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 11:52:46.978031    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:46.978044    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36 minikube.k8s.io/name=addons-221000 minikube.k8s.io/updated_at=2023_09_18T11_52_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:47.038282    1740 ops.go:34] apiserver oom_adj: -16
	I0918 11:52:47.038334    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:47.073908    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:47.620721    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:48.118781    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:48.620650    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:49.120663    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:49.620741    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:50.120032    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:50.620638    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:51.118777    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:51.618788    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:52.120234    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:52.619043    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:53.120606    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:53.619266    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:54.118979    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:54.618916    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:55.120620    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:55.620599    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:56.120585    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:56.618629    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:57.120587    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:57.618783    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:58.120645    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:58.620559    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:59.120556    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:59.619252    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:53:00.118796    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:53:00.169662    1740 kubeadm.go:1081] duration metric: took 13.191788791s to wait for elevateKubeSystemPrivileges.
	I0918 11:53:00.169677    1740 kubeadm.go:406] StartCluster complete in 20.38867075s
	I0918 11:53:00.169687    1740 settings.go:142] acquiring lock: {Name:mke420f28dda4f7a752738b3e6d571dc4216779e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:53:00.169849    1740 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 11:53:00.170110    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/kubeconfig: {Name:mk07020c5b974cf07ca0cda25f72a521eb245fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:53:00.170308    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 11:53:00.170433    1740 config.go:182] Loaded profile config "addons-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 11:53:00.170379    1740 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0918 11:53:00.170512    1740 addons.go:69] Setting ingress=true in profile "addons-221000"
	I0918 11:53:00.170517    1740 addons.go:69] Setting ingress-dns=true in profile "addons-221000"
	I0918 11:53:00.170520    1740 addons.go:231] Setting addon ingress=true in "addons-221000"
	I0918 11:53:00.170523    1740 addons.go:231] Setting addon ingress-dns=true in "addons-221000"
	I0918 11:53:00.170533    1740 addons.go:69] Setting metrics-server=true in profile "addons-221000"
	I0918 11:53:00.170538    1740 addons.go:231] Setting addon metrics-server=true in "addons-221000"
	I0918 11:53:00.170552    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.170556    1740 addons.go:69] Setting inspektor-gadget=true in profile "addons-221000"
	I0918 11:53:00.170560    1740 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-221000"
	I0918 11:53:00.170564    1740 addons.go:69] Setting gcp-auth=true in profile "addons-221000"
	I0918 11:53:00.170569    1740 mustload.go:65] Loading cluster: addons-221000
	I0918 11:53:00.170573    1740 addons.go:69] Setting registry=true in profile "addons-221000"
	I0918 11:53:00.170577    1740 addons.go:231] Setting addon registry=true in "addons-221000"
	I0918 11:53:00.170587    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.170595    1740 addons.go:69] Setting default-storageclass=true in profile "addons-221000"
	I0918 11:53:00.170618    1740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-221000"
	I0918 11:53:00.170629    1740 addons.go:69] Setting storage-provisioner=true in profile "addons-221000"
	I0918 11:53:00.170639    1740 config.go:182] Loaded profile config "addons-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 11:53:00.170657    1740 addons.go:231] Setting addon storage-provisioner=true in "addons-221000"
	I0918 11:53:00.170705    1740 host.go:66] Checking if "addons-221000" exists ...
	W0918 11:53:00.170820    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	W0918 11:53:00.170825    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	W0918 11:53:00.170829    1740 addons.go:277] "addons-221000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0918 11:53:00.170831    1740 addons.go:277] "addons-221000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0918 11:53:00.170552    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.170834    1740 addons.go:467] Verifying addon registry=true in "addons-221000"
	I0918 11:53:00.170556    1740 addons.go:69] Setting cloud-spanner=true in profile "addons-221000"
	I0918 11:53:00.175435    1740 out.go:177] * Verifying registry addon...
	I0918 11:53:00.170560    1740 addons.go:231] Setting addon inspektor-gadget=true in "addons-221000"
	I0918 11:53:00.170569    1740 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-221000"
	I0918 11:53:00.170865    1740 addons.go:231] Setting addon cloud-spanner=true in "addons-221000"
	I0918 11:53:00.170553    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.170512    1740 addons.go:69] Setting volumesnapshots=true in profile "addons-221000"
	W0918 11:53:00.171149    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	W0918 11:53:00.171162    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	I0918 11:53:00.171465    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.188489    1740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 11:53:00.182591    1740 host.go:66] Checking if "addons-221000" exists ...
	W0918 11:53:00.182599    1740 addons.go:277] "addons-221000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0918 11:53:00.182603    1740 addons_storage_classes.go:55] "addons-221000" is not running, writing default-storageclass=true to disk and skipping enablement
	I0918 11:53:00.182614    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.182623    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.182641    1740 addons.go:231] Setting addon volumesnapshots=true in "addons-221000"
	I0918 11:53:00.183250    1740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 11:53:00.196542    1740 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0918 11:53:00.192833    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.192879    1740 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 11:53:00.192896    1740 addons.go:467] Verifying addon ingress=true in "addons-221000"
	I0918 11:53:00.192903    1740 addons.go:231] Setting addon default-storageclass=true in "addons-221000"
	W0918 11:53:00.193202    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	I0918 11:53:00.196510    1740 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-221000" context rescaled to 1 replicas
	I0918 11:53:00.199509    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 11:53:00.199541    1740 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 11:53:00.199554    1740 host.go:66] Checking if "addons-221000" exists ...
	W0918 11:53:00.199564    1740 addons.go:277] "addons-221000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0918 11:53:00.206530    1740 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0918 11:53:00.210356    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 11:53:00.210367    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.210376    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 11:53:00.210391    1740 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 11:53:00.211130    1740 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 11:53:00.213481    1740 out.go:177] * Verifying ingress addon...
	I0918 11:53:00.214506    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 11:53:00.214532    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.214630    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 11:53:00.218827    1740 out.go:177] * Verifying Kubernetes components...
	I0918 11:53:00.221560    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
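The long `ssh_runner` command above pipes the coredns ConfigMap through `sed` to inject a `hosts` block (resolving `host.minikube.internal`) before the `forward` directive and a `log` directive before `errors`. The same two edits can be tried locally on a minimal sample Corefile; the sample below is illustrative (the real input comes from the `coredns` ConfigMap in `kube-system`), and GNU sed is assumed, as in the Linux guest:

```shell
# Sample Corefile fragment (illustrative, not the cluster's actual ConfigMap).
# Indentation matters: the sed patterns match exactly eight leading spaces.
corefile='        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }'

# The same two sed edits minikube runs in the guest (GNU sed assumed):
# 1) insert a hosts{} block before the forward directive,
# 2) insert a log directive before errors.
patched=$(printf '%s\n' "$corefile" | sed \
  -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' \
  -e '/^        errors *$/i \        log')
printf '%s\n' "$patched"
```

In the real flow the patched Corefile is then fed back with `kubectl replace -f -`, which is why the log shows the whole pipeline as a single `/bin/bash -c` command.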
	I0918 11:53:00.218836    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.229444    1740 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 11:53:00.229456    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 11:53:00.217515    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 11:53:00.229466    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 11:53:00.229467    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.229472    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.220086    1740 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 11:53:00.221623    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 11:53:00.236499    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 11:53:00.234112    1740 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0918 11:53:00.242380    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 11:53:00.251491    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 11:53:00.252703    1740 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 11:53:00.260396    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 11:53:00.267323    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 11:53:00.274307    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 11:53:00.284458    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 11:53:00.287512    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 11:53:00.287521    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 11:53:00.287531    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.322513    1740 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 11:53:00.322523    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 11:53:00.336060    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 11:53:00.340017    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 11:53:00.346686    1740 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 11:53:00.346693    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 11:53:00.362466    1740 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 11:53:00.362477    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 11:53:00.363923    1740 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 11:53:00.363928    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 11:53:00.368153    1740 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 11:53:00.368161    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 11:53:00.376728    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 11:53:00.376740    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 11:53:00.395920    1740 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 11:53:00.395931    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 11:53:00.403313    1740 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 11:53:00.403320    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 11:53:00.404660    1740 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 11:53:00.404665    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 11:53:00.430003    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 11:53:00.430014    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 11:53:00.492665    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 11:53:00.492677    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 11:53:00.495149    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 11:53:00.495157    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 11:53:00.500095    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 11:53:00.522789    1740 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 11:53:00.522799    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 11:53:00.561433    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 11:53:00.561445    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 11:53:00.561578    1740 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 11:53:00.561584    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 11:53:00.579630    1740 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 11:53:00.579641    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 11:53:00.604202    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 11:53:00.607378    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 11:53:00.607389    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 11:53:00.624742    1740 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 11:53:00.624755    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 11:53:00.641432    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 11:53:00.641441    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 11:53:00.675716    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 11:53:00.675728    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 11:53:00.684962    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 11:53:00.684971    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 11:53:00.690643    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 11:53:00.690652    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 11:53:00.691887    1740 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 11:53:00.691893    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 11:53:00.701987    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 11:53:00.701999    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 11:53:00.719094    1740 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 11:53:00.719103    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0918 11:53:00.821368    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 11:53:00.824330    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 11:53:01.167637    1740 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0918 11:53:01.168100    1740 node_ready.go:35] waiting up to 6m0s for node "addons-221000" to be "Ready" ...
	I0918 11:53:01.169940    1740 node_ready.go:49] node "addons-221000" has status "Ready":"True"
	I0918 11:53:01.169963    1740 node_ready.go:38] duration metric: took 1.836333ms waiting for node "addons-221000" to be "Ready" ...
	I0918 11:53:01.169968    1740 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 11:53:01.172987    1740 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mbgns" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:01.659785    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.159683334s)
	I0918 11:53:01.659805    1740 addons.go:467] Verifying addon metrics-server=true in "addons-221000"
	I0918 11:53:01.659835    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.055627667s)
	W0918 11:53:01.659861    1740 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 11:53:01.659882    1740 retry.go:31] will retry after 288.841008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
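The failure above is a CRD ordering race: the `VolumeSnapshotClass` object is applied in the same batch as the CRD that defines it, so the first `kubectl apply` exits 1 and `retry.go:31` schedules a retry after a delay. The retry-after-delay pattern can be sketched as a generic shell helper; `retry_apply` and its arguments are illustrative stand-ins, not minikube's actual implementation (which lives in Go, not shell):

```shell
#!/bin/sh
# Sketch of the retry-after-delay pattern shown by retry.go:31 above.
# retry_apply is a hypothetical helper: run a command, retrying up to
# $1 times with a $2-second pause between attempts.
retry_apply() {
  attempts=$1; shift
  delay=$1; shift
  i=1
  while :; do
    if "$@"; then
      return 0
    fi
    if [ "$i" -ge "$attempts" ]; then
      echo "giving up after $attempts attempts" >&2
      return 1
    fi
    echo "attempt $i failed, retrying in ${delay}s" >&2
    sleep "$delay"
    i=$((i + 1))
  done
}

# Demo command standing in for "kubectl apply": it fails until a marker
# file exists, mimicking an apply that fails while CRDs are not yet
# established and succeeds once they are.
flaky() {
  if [ -f /tmp/crd-established ]; then
    echo "applied"
  else
    touch /tmp/crd-established
    return 1
  fi
}

rm -f /tmp/crd-established
retry_apply 3 0 flaky
```

Note that minikube's actual retry (visible a few lines below) also switches to `kubectl apply --force`, which replaces rather than patches the now-resolvable snapshot-class object.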
	I0918 11:53:01.660144    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.320131291s)
	I0918 11:53:01.949280    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 11:53:02.183526    1740 pod_ready.go:92] pod "coredns-5dd5756b68-mbgns" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.183539    1740 pod_ready.go:81] duration metric: took 1.010553542s waiting for pod "coredns-5dd5756b68-mbgns" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.183545    1740 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-z4cmf" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.184891    1740 pod_ready.go:97] error getting pod "coredns-5dd5756b68-z4cmf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-z4cmf" not found
	I0918 11:53:02.184904    1740 pod_ready.go:81] duration metric: took 1.354709ms waiting for pod "coredns-5dd5756b68-z4cmf" in "kube-system" namespace to be "Ready" ...
	E0918 11:53:02.184909    1740 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-z4cmf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-z4cmf" not found
	I0918 11:53:02.184914    1740 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.187700    1740 pod_ready.go:92] pod "etcd-addons-221000" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.187709    1740 pod_ready.go:81] duration metric: took 2.791875ms waiting for pod "etcd-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.187714    1740 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.190228    1740 pod_ready.go:92] pod "kube-apiserver-addons-221000" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.190236    1740 pod_ready.go:81] duration metric: took 2.518208ms waiting for pod "kube-apiserver-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.190240    1740 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.571748    1740 pod_ready.go:92] pod "kube-controller-manager-addons-221000" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.571761    1740 pod_ready.go:81] duration metric: took 381.518375ms waiting for pod "kube-controller-manager-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.571765    1740 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7gqn" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.823208    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.998874458s)
	I0918 11:53:02.823230    1740 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-221000"
	I0918 11:53:02.828329    1740 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 11:53:02.838784    1740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 11:53:02.842760    1740 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 11:53:02.842767    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:02.847417    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:02.972129    1740 pod_ready.go:92] pod "kube-proxy-q7gqn" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.972138    1740 pod_ready.go:81] duration metric: took 400.3735ms waiting for pod "kube-proxy-q7gqn" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.972143    1740 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:03.351863    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:03.371796    1740 pod_ready.go:92] pod "kube-scheduler-addons-221000" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:03.371806    1740 pod_ready.go:81] duration metric: took 399.662875ms waiting for pod "kube-scheduler-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:03.371810    1740 pod_ready.go:38] duration metric: took 2.201856875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 11:53:03.371820    1740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 11:53:03.371875    1740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 11:53:03.851785    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:04.352157    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:04.679993    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.73071775s)
	I0918 11:53:04.680002    1740 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.308127625s)
	I0918 11:53:04.680021    1740 api_server.go:72] duration metric: took 4.46545525s to wait for apiserver process to appear ...
	I0918 11:53:04.680025    1740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 11:53:04.680031    1740 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0918 11:53:04.683995    1740 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0918 11:53:04.684750    1740 api_server.go:141] control plane version: v1.28.2
	I0918 11:53:04.684756    1740 api_server.go:131] duration metric: took 4.728917ms to wait for apiserver health ...
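The healthz check logged above follows a simple pattern: issue one GET against the apiserver's `/healthz` endpoint and accept only an HTTP 200 whose body is the literal string `ok`. A minimal, self-contained sketch of that acceptance rule (using a local `httptest` server as a stand-in for the real `https://192.168.105.2:8443` endpoint, which is not reachable here):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// isHealthy applies the acceptance rule the log shows: the endpoint
// must answer HTTP 200 with the literal body "ok".
func isHealthy(status int, body string) bool {
	return status == http.StatusOK && body == "ok"
}

// checkHealthz performs one GET against a healthz URL and evaluates
// the response with isHealthy.
func checkHealthz(client *http.Client, url string) (bool, error) {
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return isHealthy(resp.StatusCode, string(body)), nil
}

func main() {
	// A local test server stands in for the real apiserver endpoint.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "ok")
	}))
	defer srv.Close()

	healthy, _ := checkHealthz(srv.Client(), srv.URL+"/healthz")
	fmt.Println("healthy:", healthy)
}
```

This is an illustrative sketch, not minikube's actual `api_server.go` implementation; the real check also handles TLS configuration and retries.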
	I0918 11:53:04.684760    1740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 11:53:04.689157    1740 system_pods.go:59] 13 kube-system pods found
	I0918 11:53:04.689166    1740 system_pods.go:61] "coredns-5dd5756b68-mbgns" [376db80e-bef7-49a8-805c-d250bbb5ddc5] Running
	I0918 11:53:04.689171    1740 system_pods.go:61] "csi-hostpath-attacher-0" [b3eb2340-f156-4127-b844-79013849b5d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 11:53:04.689175    1740 system_pods.go:61] "csi-hostpath-resizer-0" [a39fc7e1-21e6-43e0-8a71-76fc3122aa67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 11:53:04.689182    1740 system_pods.go:61] "csi-hostpathplugin-s878j" [f7db805b-46b1-4d4f-b620-4bb9732a0ba0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 11:53:04.689185    1740 system_pods.go:61] "etcd-addons-221000" [4500c338-1b0c-4d39-b5b6-76d42cf285f5] Running
	I0918 11:53:04.689188    1740 system_pods.go:61] "kube-apiserver-addons-221000" [93a9cc2e-3463-4332-a37c-86437106ed5e] Running
	I0918 11:53:04.689190    1740 system_pods.go:61] "kube-controller-manager-addons-221000" [eaeef50f-2b51-43b2-91c9-a4f97f5460ae] Running
	I0918 11:53:04.689193    1740 system_pods.go:61] "kube-proxy-q7gqn" [e971c33c-7d1b-47b7-9ff5-3a629f12fb57] Running
	I0918 11:53:04.689195    1740 system_pods.go:61] "kube-scheduler-addons-221000" [9d39af63-6f0a-4855-9435-f8e7af26869e] Running
	I0918 11:53:04.689199    1740 system_pods.go:61] "metrics-server-7c66d45ddc-ph4qt" [193ff040-48cb-4429-8aa3-4065ecb5241d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 11:53:04.689205    1740 system_pods.go:61] "snapshot-controller-58dbcc7b99-89j9m" [d9a96c2a-2231-4dea-abbf-16875dd2b1d7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 11:53:04.689210    1740 system_pods.go:61] "snapshot-controller-58dbcc7b99-xwwxn" [5a80446c-a3a8-4ce7-8ac4-2894087691fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 11:53:04.689214    1740 system_pods.go:61] "storage-provisioner" [88ff8527-97e2-4317-8d5b-a2502e8cb7f7] Running
	I0918 11:53:04.689217    1740 system_pods.go:74] duration metric: took 4.45475ms to wait for pod list to return data ...
	I0918 11:53:04.689220    1740 default_sa.go:34] waiting for default service account to be created ...
	I0918 11:53:04.690951    1740 default_sa.go:45] found service account: "default"
	I0918 11:53:04.690958    1740 default_sa.go:55] duration metric: took 1.736083ms for default service account to be created ...
	I0918 11:53:04.690961    1740 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 11:53:04.694962    1740 system_pods.go:86] 13 kube-system pods found
	I0918 11:53:04.694971    1740 system_pods.go:89] "coredns-5dd5756b68-mbgns" [376db80e-bef7-49a8-805c-d250bbb5ddc5] Running
	I0918 11:53:04.694976    1740 system_pods.go:89] "csi-hostpath-attacher-0" [b3eb2340-f156-4127-b844-79013849b5d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 11:53:04.694979    1740 system_pods.go:89] "csi-hostpath-resizer-0" [a39fc7e1-21e6-43e0-8a71-76fc3122aa67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 11:53:04.694983    1740 system_pods.go:89] "csi-hostpathplugin-s878j" [f7db805b-46b1-4d4f-b620-4bb9732a0ba0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 11:53:04.694986    1740 system_pods.go:89] "etcd-addons-221000" [4500c338-1b0c-4d39-b5b6-76d42cf285f5] Running
	I0918 11:53:04.694988    1740 system_pods.go:89] "kube-apiserver-addons-221000" [93a9cc2e-3463-4332-a37c-86437106ed5e] Running
	I0918 11:53:04.694990    1740 system_pods.go:89] "kube-controller-manager-addons-221000" [eaeef50f-2b51-43b2-91c9-a4f97f5460ae] Running
	I0918 11:53:04.694994    1740 system_pods.go:89] "kube-proxy-q7gqn" [e971c33c-7d1b-47b7-9ff5-3a629f12fb57] Running
	I0918 11:53:04.694996    1740 system_pods.go:89] "kube-scheduler-addons-221000" [9d39af63-6f0a-4855-9435-f8e7af26869e] Running
	I0918 11:53:04.694999    1740 system_pods.go:89] "metrics-server-7c66d45ddc-ph4qt" [193ff040-48cb-4429-8aa3-4065ecb5241d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 11:53:04.695003    1740 system_pods.go:89] "snapshot-controller-58dbcc7b99-89j9m" [d9a96c2a-2231-4dea-abbf-16875dd2b1d7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 11:53:04.695006    1740 system_pods.go:89] "snapshot-controller-58dbcc7b99-xwwxn" [5a80446c-a3a8-4ce7-8ac4-2894087691fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 11:53:04.695009    1740 system_pods.go:89] "storage-provisioner" [88ff8527-97e2-4317-8d5b-a2502e8cb7f7] Running
	I0918 11:53:04.695012    1740 system_pods.go:126] duration metric: took 4.049ms to wait for k8s-apps to be running ...
	I0918 11:53:04.695014    1740 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 11:53:04.695074    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 11:53:04.700754    1740 system_svc.go:56] duration metric: took 5.736541ms WaitForService to wait for kubelet.
	I0918 11:53:04.700761    1740 kubeadm.go:581] duration metric: took 4.486195916s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0918 11:53:04.700771    1740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 11:53:04.702224    1740 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0918 11:53:04.702234    1740 node_conditions.go:123] node cpu capacity is 2
	I0918 11:53:04.702239    1740 node_conditions.go:105] duration metric: took 1.466083ms to run NodePressure ...
	I0918 11:53:04.702244    1740 start.go:228] waiting for startup goroutines ...
	I0918 11:53:04.851794    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:05.352908    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:05.851843    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:06.351684    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:06.787954    1740 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 11:53:06.787972    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:06.819465    1740 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0918 11:53:06.824867    1740 addons.go:231] Setting addon gcp-auth=true in "addons-221000"
	I0918 11:53:06.824887    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:06.825624    1740 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 11:53:06.825631    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:06.851875    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:06.860327    1740 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0918 11:53:06.868318    1740 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0918 11:53:06.871302    1740 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 11:53:06.871308    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 11:53:06.876186    1740 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 11:53:06.876191    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 11:53:06.881437    1740 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 11:53:06.881443    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0918 11:53:06.887709    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 11:53:07.124491    1740 addons.go:467] Verifying addon gcp-auth=true in "addons-221000"
	I0918 11:53:07.129030    1740 out.go:177] * Verifying gcp-auth addon...
	I0918 11:53:07.137330    1740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0918 11:53:07.139319    1740 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 11:53:07.139325    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:07.142056    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:07.351786    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:07.647787    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:07.851746    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:08.144923    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:08.351959    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:08.646023    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:08.851939    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:09.146053    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:09.352896    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:09.645776    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:09.851792    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:10.146843    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:10.351428    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:10.645894    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:10.852236    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:11.145951    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:11.352216    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:11.645774    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:11.852232    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:12.145929    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:12.355119    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:12.840629    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:12.851603    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:13.145852    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:13.351962    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:13.646245    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:13.852112    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:14.144894    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:14.351204    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:14.646046    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:15.059939    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:15.145929    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:15.351906    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:15.646033    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:15.852022    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:16.145893    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:16.351976    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:16.645740    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:16.852007    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:17.145547    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:17.352003    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:17.646147    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:17.853011    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:18.145960    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:18.353519    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:18.646257    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:18.851829    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:19.143968    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:19.351778    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:19.645563    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:19.851678    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:20.145637    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:20.351425    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:20.645727    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:20.852080    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:21.145053    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:21.352010    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:21.645631    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:21.851983    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:22.146102    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:22.351559    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:22.645364    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:22.851995    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:23.145511    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:23.351664    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:23.646427    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:23.851813    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:24.144653    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:24.351670    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:24.645732    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:24.851659    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:25.145755    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:25.350286    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:25.645572    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:25.851968    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:26.145692    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:26.352042    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:26.645575    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:26.852307    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:27.145498    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:27.352114    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:27.645719    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:27.851965    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:28.145987    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:28.351772    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:28.645594    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:28.851993    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:29.145747    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:29.351502    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:29.645697    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:29.851953    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:30.145495    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:30.351423    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:30.645330    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:30.852142    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:31.145422    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:31.352025    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:31.645572    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:31.852021    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:32.145742    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:32.351596    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:32.645761    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:32.851846    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:33.145666    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:33.351607    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:33.646262    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:33.852115    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:34.145556    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:34.353136    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:34.644719    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:34.852049    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:35.145343    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:35.351551    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:35.645361    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:35.851476    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:36.143766    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:36.351679    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:36.645788    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:36.851584    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:37.145933    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:37.351445    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:37.646034    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:37.851557    1740 kapi.go:107] duration metric: took 35.013102459s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0918 11:53:38.145661    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:38.645759    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:39.145382    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:39.645858    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:40.145851    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:40.645842    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:41.145511    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:41.646176    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:42.145937    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:42.645521    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:43.145380    1740 kapi.go:107] duration metric: took 36.008387458s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0918 11:53:43.148984    1740 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-221000 cluster.
	I0918 11:53:43.152958    1740 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 11:53:43.156871    1740 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0918 11:59:00.190235    1740 kapi.go:107] duration metric: took 6m0.011650291s to wait for kubernetes.io/minikube-addons=registry ...
	W0918 11:59:00.190354    1740 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0918 11:59:00.236861    1740 kapi.go:107] duration metric: took 6m0.007418791s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0918 11:59:00.236887    1740 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0918 11:59:00.245622    1740 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, default-storageclass, metrics-server, storage-provisioner, inspektor-gadget, volumesnapshots, csi-hostpath-driver, gcp-auth
	I0918 11:59:00.252559    1740 addons.go:502] enable addons completed in 6m0.086876584s: enabled=[ingress-dns cloud-spanner default-storageclass metrics-server storage-provisioner inspektor-gadget volumesnapshots csi-hostpath-driver gcp-auth]
	I0918 11:59:00.252570    1740 start.go:233] waiting for cluster config update ...
	I0918 11:59:00.252577    1740 start.go:242] writing updated cluster config ...
	I0918 11:59:00.253046    1740 ssh_runner.go:195] Run: rm -f paused
	I0918 11:59:00.282509    1740 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I0918 11:59:00.285533    1740 out.go:177] * Done! kubectl is now configured to use "addons-221000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-18 18:52:28 UTC, ends at Mon 2023-09-18 19:11:00 UTC. --
	Sep 18 18:53:31 addons-221000 dockerd[1106]: time="2023-09-18T18:53:31.913045758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 18:53:31 addons-221000 dockerd[1106]: time="2023-09-18T18:53:31.913052008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 18:53:32 addons-221000 dockerd[1100]: time="2023-09-18T18:53:32.000341717Z" level=warning msg="reference for unknown type: " digest="sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8" remote="registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Sep 18 18:53:34 addons-221000 cri-dockerd[995]: time="2023-09-18T18:53:34Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8"
	Sep 18 18:53:34 addons-221000 dockerd[1106]: time="2023-09-18T18:53:34.455925540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 18:53:34 addons-221000 dockerd[1106]: time="2023-09-18T18:53:34.456111241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 18:53:34 addons-221000 dockerd[1106]: time="2023-09-18T18:53:34.456126491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 18:53:34 addons-221000 dockerd[1106]: time="2023-09-18T18:53:34.456165406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 18:53:34 addons-221000 dockerd[1100]: time="2023-09-18T18:53:34.546723654Z" level=warning msg="reference for unknown type: " digest="sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f" remote="registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Sep 18 18:53:36 addons-221000 cri-dockerd[995]: time="2023-09-18T18:53:36Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
	Sep 18 18:53:36 addons-221000 dockerd[1106]: time="2023-09-18T18:53:36.766332627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 18:53:36 addons-221000 dockerd[1106]: time="2023-09-18T18:53:36.766362584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 18:53:36 addons-221000 dockerd[1106]: time="2023-09-18T18:53:36.766375125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 18:53:36 addons-221000 dockerd[1106]: time="2023-09-18T18:53:36.766381709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 18:53:40 addons-221000 dockerd[1106]: time="2023-09-18T18:53:40.152856514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 18:53:40 addons-221000 dockerd[1106]: time="2023-09-18T18:53:40.152909804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 18:53:40 addons-221000 dockerd[1106]: time="2023-09-18T18:53:40.152924220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 18:53:40 addons-221000 dockerd[1106]: time="2023-09-18T18:53:40.152934887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 18:53:40 addons-221000 cri-dockerd[995]: time="2023-09-18T18:53:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e321131e2d88d58235861ac8364e2e9ea1e94ae5e42fd565fa9e65fa17e39a30/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 18 18:53:40 addons-221000 dockerd[1100]: time="2023-09-18T18:53:40.332879779Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Sep 18 18:53:41 addons-221000 cri-dockerd[995]: time="2023-09-18T18:53:41Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Sep 18 18:53:42 addons-221000 dockerd[1106]: time="2023-09-18T18:53:42.114894841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 18:53:42 addons-221000 dockerd[1106]: time="2023-09-18T18:53:42.115057171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 18:53:42 addons-221000 dockerd[1106]: time="2023-09-18T18:53:42.115064546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 18:53:42 addons-221000 dockerd[1106]: time="2023-09-18T18:53:42.115841863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID
	6009e365438d8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 17 minutes ago      Running             gcp-auth                                 0                   e321131e2d88d
	3bb40d34efc29       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          17 minutes ago      Running             csi-snapshotter                          0                   2dc44d50472a8
	503c9a6a65b7d       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          17 minutes ago      Running             csi-provisioner                          0                   2dc44d50472a8
	d722c62039b0b       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            17 minutes ago      Running             liveness-probe                           0                   2dc44d50472a8
	ddc75a67ab827       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           17 minutes ago      Running             hostpath                                 0                   2dc44d50472a8
	6b4de2f3acf09       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                17 minutes ago      Running             node-driver-registrar                    0                   2dc44d50472a8
	70e58783f3748       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              17 minutes ago      Running             csi-resizer                              0                   5ec9976a9ac7e
	2c2e40482e98d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   17 minutes ago      Running             csi-external-health-monitor-controller   0                   2dc44d50472a8
	d0bd02275ae05       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             17 minutes ago      Running             csi-attacher                             0                   b64a9c9d34ad2
	afd4040b11d6e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7                            17 minutes ago      Running             gadget                                   0                   a7e6c06852560
	ab33a0f8756c5       registry.k8s.io/metrics-server/metrics-server@sha256:ee4304963fb035239bb5c5e8c10f2f38ee80efc16ecbdb9feb7213c17ae2e86e                        17 minutes ago      Running             metrics-server                           0                   b759b3bc87e67
	68cb499e8a4ea       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      17 minutes ago      Running             volume-snapshot-controller               0                   322c06e1ffed3
	db4658d155f25       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      17 minutes ago      Running             volume-snapshot-controller               0                   f31f590bd0703
	96b1e19d34d0f       ba04bb24b9575                                                                                                                                17 minutes ago      Running             storage-provisioner                      0                   31030bebae5dd
	3397ed73112e1       97e04611ad434                                                                                                                                18 minutes ago      Running             coredns                                  0                   ecf214ed85c34
	3b8b236037bf7       7da62c127fc0f                                                                                                                                18 minutes ago      Running             kube-proxy                               0                   efd5b0f304a7c
	17d16f9191cb9       64fc40cee3716                                                                                                                                18 minutes ago      Running             kube-scheduler                           0                   6dcf2ed48fa0d
	cb85fb8fd00cf       89d57b83c1786                                                                                                                                18 minutes ago      Running             kube-controller-manager                  0                   d0674da883f4f
	4a8eb16a561d8       30bb499447fe1                                                                                                                                18 minutes ago      Running             kube-apiserver                           0                   2a8be15cc8448
	2a7ec1fe69df2       9cdd6470f48c8                                                                                                                                18 minutes ago      Running             etcd                                     0                   0f317e031c491
	
	* 
	* ==> coredns [3397ed73112e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51523 - 34394 "HINFO IN 2890994018648264973.7586751697303130781. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046989834s
	[INFO] 10.244.0.11:51746 - 24273 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125498s
	[INFO] 10.244.0.11:38492 - 22517 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000252786s
	[INFO] 10.244.0.11:44956 - 48338 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000035291s
	[INFO] 10.244.0.11:34979 - 63104 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000059957s
	[INFO] 10.244.0.11:38551 - 60811 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000037916s
	[INFO] 10.244.0.11:44454 - 13651 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000042082s
	[INFO] 10.244.0.11:41401 - 32727 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001047395s
	[INFO] 10.244.0.11:40924 - 8451 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001040478s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-221000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-221000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36
	                    minikube.k8s.io/name=addons-221000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_18T11_52_46_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-221000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-221000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Sep 2023 18:52:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-221000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Sep 2023 19:10:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Sep 2023 19:09:09 +0000   Mon, 18 Sep 2023 18:52:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Sep 2023 19:09:09 +0000   Mon, 18 Sep 2023 18:52:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Sep 2023 19:09:09 +0000   Mon, 18 Sep 2023 18:52:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Sep 2023 19:09:09 +0000   Mon, 18 Sep 2023 18:52:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-221000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 605bc2fc72a045ae88e907db961da3d3
	  System UUID:                605bc2fc72a045ae88e907db961da3d3
	  Boot ID:                    6a9990c2-fe5e-48d8-97ca-ea50d8c8e3b8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-2qmph                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  gcp-auth                    gcp-auth-d4c87556c-2vm8d                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-5dd5756b68-mbgns                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpathplugin-s878j                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-addons-221000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-221000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-221000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-q7gqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-221000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-7c66d45ddc-ph4qt          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         17m
	  kube-system                 snapshot-controller-58dbcc7b99-89j9m     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-58dbcc7b99-xwwxn     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node addons-221000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node addons-221000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node addons-221000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m   kubelet          Node addons-221000 status is now: NodeReady
	  Normal  RegisteredNode           18m   node-controller  Node addons-221000 event: Registered Node addons-221000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.641705] EINJ: EINJ table not found.
	[  +0.512113] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044199] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000795] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.076142] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.068934] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.415221] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.169313] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +0.075007] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[  +0.084392] systemd-fstab-generator[726]: Ignoring "noauto" for root device
	[  +1.144805] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.094980] systemd-fstab-generator[914]: Ignoring "noauto" for root device
	[  +0.076494] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[  +0.074272] systemd-fstab-generator[936]: Ignoring "noauto" for root device
	[  +0.076208] systemd-fstab-generator[947]: Ignoring "noauto" for root device
	[  +0.088287] systemd-fstab-generator[988]: Ignoring "noauto" for root device
	[  +2.538937] systemd-fstab-generator[1093]: Ignoring "noauto" for root device
	[  +2.473246] kauditd_printk_skb: 29 callbacks suppressed
	[  +3.092849] systemd-fstab-generator[1411]: Ignoring "noauto" for root device
	[  +4.633357] systemd-fstab-generator[2291]: Ignoring "noauto" for root device
	[Sep18 18:53] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.646092] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.523270] kauditd_printk_skb: 55 callbacks suppressed
	[ +11.548658] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [2a7ec1fe69df] <==
	* {"level":"info","ts":"2023-09-18T18:52:43.020227Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-18T18:52:43.020734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-18T18:52:43.023404Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T18:52:43.023573Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-18T18:52:43.02399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-09-18T18:52:43.024241Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-18T18:52:43.02431Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-18T18:52:43.024433Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T18:52:43.024491Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T18:52:43.025343Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T18:53:12.839558Z","caller":"traceutil/trace.go:171","msg":"trace[499305584] linearizableReadLoop","detail":"{readStateIndex:693; appliedIndex:692; }","duration":"193.895541ms","start":"2023-09-18T18:53:12.645653Z","end":"2023-09-18T18:53:12.839549Z","steps":["trace[499305584] 'read index received'  (duration: 193.791791ms)","trace[499305584] 'applied index is now lower than readState.Index'  (duration: 103.417µs)"],"step_count":2}
	{"level":"info","ts":"2023-09-18T18:53:12.839607Z","caller":"traceutil/trace.go:171","msg":"trace[593448589] transaction","detail":"{read_only:false; response_revision:671; number_of_response:1; }","duration":"309.762625ms","start":"2023-09-18T18:53:12.529838Z","end":"2023-09-18T18:53:12.839601Z","steps":["trace[593448589] 'process raft request'  (duration: 309.63075ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:53:12.839655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.989167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10532"}
	{"level":"info","ts":"2023-09-18T18:53:12.839671Z","caller":"traceutil/trace.go:171","msg":"trace[1312627899] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:671; }","duration":"194.031ms","start":"2023-09-18T18:53:12.645637Z","end":"2023-09-18T18:53:12.839668Z","steps":["trace[1312627899] 'agreement among raft nodes before linearized reading'  (duration: 193.963833ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:53:12.8398Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-18T18:53:12.529832Z","time spent":"309.7915ms","remote":"127.0.0.1:54556","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:669 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-09-18T18:53:15.058529Z","caller":"traceutil/trace.go:171","msg":"trace[42488278] linearizableReadLoop","detail":"{readStateIndex:694; appliedIndex:693; }","duration":"207.837416ms","start":"2023-09-18T18:53:14.850683Z","end":"2023-09-18T18:53:15.05852Z","steps":["trace[42488278] 'read index received'  (duration: 207.76525ms)","trace[42488278] 'applied index is now lower than readState.Index'  (duration: 71.708µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-18T18:53:15.058651Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.966125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:13 size:63849"}
	{"level":"info","ts":"2023-09-18T18:53:15.058696Z","caller":"traceutil/trace.go:171","msg":"trace[2008509018] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:13; response_revision:672; }","duration":"208.018666ms","start":"2023-09-18T18:53:14.850673Z","end":"2023-09-18T18:53:15.058691Z","steps":["trace[2008509018] 'agreement among raft nodes before linearized reading'  (duration: 207.886666ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:53:15.058824Z","caller":"traceutil/trace.go:171","msg":"trace[1590636669] transaction","detail":"{read_only:false; response_revision:672; number_of_response:1; }","duration":"214.775084ms","start":"2023-09-18T18:53:14.844046Z","end":"2023-09-18T18:53:15.058821Z","steps":["trace[1590636669] 'process raft request'  (duration: 214.424584ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T19:02:43.443637Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1094}
	{"level":"info","ts":"2023-09-18T19:02:43.458272Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1094,"took":"14.26046ms","hash":1108450172}
	{"level":"info","ts":"2023-09-18T19:02:43.458292Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1108450172,"revision":1094,"compact-revision":-1}
	{"level":"info","ts":"2023-09-18T19:07:43.446092Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1454}
	{"level":"info","ts":"2023-09-18T19:07:43.44678Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1454,"took":"494.332µs","hash":2695871163}
	{"level":"info","ts":"2023-09-18T19:07:43.446796Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2695871163,"revision":1454,"compact-revision":1094}
	
	* 
	* ==> gcp-auth [6009e365438d] <==
	* 2023/09/18 18:53:42 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  19:11:00 up 18 min,  0 users,  load average: 0.06, 0.14, 0.16
	Linux addons-221000 5.10.57 #1 SMP PREEMPT Fri Sep 15 19:03:18 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4a8eb16a561d] <==
	* I0918 19:01:44.199428       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:02:44.199298       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:02:44.256077       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:02:44.256094       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:02:44.256307       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:02:44.256319       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:02:44.256347       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0918 19:02:44.267692       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:03:44.200100       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:04:44.199686       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:05:44.199355       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:06:44.199779       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:07:44.199253       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:07:44.256724       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:07:44.256790       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:07:44.256927       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:07:44.256967       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:07:44.257034       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:07:44.257066       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:07:44.257262       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0918 19:07:44.288772       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:08:44.199786       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0918 19:09:04.182369       1 watcher.go:245] watch chan error: etcdserver: mvcc: required revision has been compacted
	I0918 19:09:44.199278       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:10:44.200176       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [cb85fb8fd00c] <==
	* I0918 18:53:29.248178       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0918 18:53:29.348997       1 shared_informer.go:318] Caches are synced for resource quota
	I0918 18:53:29.586694       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:53:29.602966       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:53:29.662572       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0918 18:53:29.707847       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:53:29.763726       1 shared_informer.go:318] Caches are synced for garbage collector
	I0918 18:53:30.588673       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:53:30.591170       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:53:30.592513       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:53:30.592695       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0918 18:53:30.619905       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:53:30.641992       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:53:30.699064       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:53:31.622753       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:53:31.625237       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:53:31.627760       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:53:31.627944       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0918 18:53:31.655550       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:53:42.736705       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="4.14133ms"
	I0918 18:53:42.736793       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="11.667µs"
	I0918 18:54:00.007554       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:54:00.025814       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:54:01.003894       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:54:01.011510       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	
	* 
	* ==> kube-proxy [3b8b236037bf] <==
	* I0918 18:53:00.480730       1 server_others.go:69] "Using iptables proxy"
	I0918 18:53:00.488175       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0918 18:53:00.508601       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0918 18:53:00.508621       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 18:53:00.509405       1 server_others.go:152] "Using iptables Proxier"
	I0918 18:53:00.509431       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0918 18:53:00.509518       1 server.go:846] "Version info" version="v1.28.2"
	I0918 18:53:00.509524       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 18:53:00.512081       1 config.go:188] "Starting service config controller"
	I0918 18:53:00.512089       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0918 18:53:00.512105       1 config.go:97] "Starting endpoint slice config controller"
	I0918 18:53:00.512107       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0918 18:53:00.512284       1 config.go:315] "Starting node config controller"
	I0918 18:53:00.512287       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0918 18:53:01.015326       1 shared_informer.go:318] Caches are synced for node config
	I0918 18:53:01.015354       1 shared_informer.go:318] Caches are synced for service config
	I0918 18:53:01.015384       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [17d16f9191cb] <==
	* W0918 18:52:43.862760       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 18:52:43.863344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0918 18:52:43.862808       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 18:52:43.863392       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0918 18:52:43.862823       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 18:52:43.863442       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0918 18:52:43.862835       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 18:52:43.863451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0918 18:52:43.862850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 18:52:43.863466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0918 18:52:43.862867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 18:52:43.863530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0918 18:52:43.862878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 18:52:43.863565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0918 18:52:43.862692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 18:52:43.863575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0918 18:52:43.863300       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 18:52:43.863661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0918 18:52:44.751836       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 18:52:44.751859       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 18:52:44.752492       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 18:52:44.752505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0918 18:52:44.760405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 18:52:44.760417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0918 18:52:46.760908       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-18 18:52:28 UTC, ends at Mon 2023-09-18 19:11:00 UTC. --
	Sep 18 19:05:46 addons-221000 kubelet[2297]: E0918 19:05:46.790235    2297 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 18 19:05:46 addons-221000 kubelet[2297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 19:05:46 addons-221000 kubelet[2297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 19:05:46 addons-221000 kubelet[2297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 19:06:46 addons-221000 kubelet[2297]: E0918 19:06:46.789994    2297 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 18 19:06:46 addons-221000 kubelet[2297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 19:06:46 addons-221000 kubelet[2297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 19:06:46 addons-221000 kubelet[2297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 19:07:46 addons-221000 kubelet[2297]: E0918 19:07:46.790185    2297 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 18 19:07:46 addons-221000 kubelet[2297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 19:07:46 addons-221000 kubelet[2297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 19:07:46 addons-221000 kubelet[2297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 19:07:46 addons-221000 kubelet[2297]: W0918 19:07:46.810475    2297 machine.go:65] Cannot read vendor id correctly, set empty.
	Sep 18 19:08:46 addons-221000 kubelet[2297]: E0918 19:08:46.790690    2297 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 18 19:08:46 addons-221000 kubelet[2297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 19:08:46 addons-221000 kubelet[2297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 19:08:46 addons-221000 kubelet[2297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 19:09:46 addons-221000 kubelet[2297]: E0918 19:09:46.790177    2297 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 18 19:09:46 addons-221000 kubelet[2297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 19:09:46 addons-221000 kubelet[2297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 19:09:46 addons-221000 kubelet[2297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 19:10:46 addons-221000 kubelet[2297]: E0918 19:10:46.790244    2297 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 18 19:10:46 addons-221000 kubelet[2297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 19:10:46 addons-221000 kubelet[2297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 19:10:46 addons-221000 kubelet[2297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [96b1e19d34d0] <==
	* I0918 18:53:02.494940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 18:53:02.503242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 18:53:02.503674       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 18:53:02.507046       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 18:53:02.507107       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-221000_d01c2a4d-820c-47b2-b527-e47d47f0ebf9!
	I0918 18:53:02.507893       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"993ff549-fac0-4a25-b8bc-6e13c7f3eb70", APIVersion:"v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-221000_d01c2a4d-820c-47b2-b527-e47d47f0ebf9 became leader
	I0918 18:53:02.607443       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-221000_d01c2a4d-820c-47b2-b527-e47d47f0ebf9!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-221000 -n addons-221000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-221000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (720.82s)

TestAddons/parallel/Ingress (0.78s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-221000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-221000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (35.61225ms)

** stderr ** 
	error: no matching resources found

** /stderr **
addons_test.go:184: failed waiting for ingress-nginx-controller : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-221000 -n addons-221000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-221000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:51 PDT |                     |
	|         | -p download-only-242000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT |                     |
	|         | -p download-only-242000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:52 PDT |
	| delete  | -p download-only-242000        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:52 PDT |
	| delete  | -p download-only-242000        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:52 PDT |
	| start   | --download-only -p             | binary-mirror-077000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT |                     |
	|         | binary-mirror-077000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49414         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-077000        | binary-mirror-077000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:52 PDT |
	| start   | -p addons-221000               | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:59 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 12:11 PDT |                     |
	|         | addons-221000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 12:11 PDT | 18 Sep 23 12:11 PDT |
	|         | -p addons-221000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-221000 addons           | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 12:11 PDT | 18 Sep 23 12:12 PDT |
	|         | disable csi-hostpath-driver    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-221000 addons           | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 12:12 PDT | 18 Sep 23 12:12 PDT |
	|         | disable volumesnapshots        |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-221000 addons           | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 12:12 PDT | 18 Sep 23 12:12 PDT |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 12:12 PDT | 18 Sep 23 12:12 PDT |
	|         | addons-221000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 11:52:16
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 11:52:16.711602    1740 out.go:296] Setting OutFile to fd 1 ...
	I0918 11:52:16.711748    1740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 11:52:16.711751    1740 out.go:309] Setting ErrFile to fd 2...
	I0918 11:52:16.711753    1740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 11:52:16.711880    1740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 11:52:16.712918    1740 out.go:303] Setting JSON to false
	I0918 11:52:16.728001    1740 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1310,"bootTime":1695061826,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 11:52:16.728087    1740 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 11:52:16.732378    1740 out.go:177] * [addons-221000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 11:52:16.739454    1740 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 11:52:16.743421    1740 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 11:52:16.739507    1740 notify.go:220] Checking for updates...
	I0918 11:52:16.749403    1740 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 11:52:16.752377    1740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 11:52:16.755381    1740 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 11:52:16.758417    1740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 11:52:16.761446    1740 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 11:52:16.765371    1740 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 11:52:16.777355    1740 start.go:298] selected driver: qemu2
	I0918 11:52:16.777364    1740 start.go:902] validating driver "qemu2" against <nil>
	I0918 11:52:16.777372    1740 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 11:52:16.779390    1740 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 11:52:16.782385    1740 out.go:177] * Automatically selected the socket_vmnet network
	I0918 11:52:16.785462    1740 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 11:52:16.785488    1740 cni.go:84] Creating CNI manager for ""
	I0918 11:52:16.785496    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 11:52:16.785507    1740 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 11:52:16.785513    1740 start_flags.go:321] config:
	{Name:addons-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 11:52:16.789634    1740 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 11:52:16.798394    1740 out.go:177] * Starting control plane node addons-221000 in cluster addons-221000
	I0918 11:52:16.802195    1740 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 11:52:16.802217    1740 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 11:52:16.802234    1740 cache.go:57] Caching tarball of preloaded images
	I0918 11:52:16.802301    1740 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 11:52:16.802315    1740 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 11:52:16.802542    1740 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/config.json ...
	I0918 11:52:16.802555    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/config.json: {Name:mk6624c585fbc7911138df2cd59d1f2e10251cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:16.802799    1740 start.go:365] acquiring machines lock for addons-221000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 11:52:16.802873    1740 start.go:369] acquired machines lock for "addons-221000" in 68.417µs
	I0918 11:52:16.802886    1740 start.go:93] Provisioning new machine with config: &{Name:addons-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 11:52:16.802925    1740 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 11:52:16.810242    1740 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0918 11:52:17.161676    1740 start.go:159] libmachine.API.Create for "addons-221000" (driver="qemu2")
	I0918 11:52:17.161722    1740 client.go:168] LocalClient.Create starting
	I0918 11:52:17.161932    1740 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 11:52:17.253776    1740 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 11:52:17.312301    1740 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 11:52:17.776256    1740 main.go:141] libmachine: Creating SSH key...
	I0918 11:52:17.897328    1740 main.go:141] libmachine: Creating Disk image...
	I0918 11:52:17.897334    1740 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 11:52:17.897524    1740 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2
	I0918 11:52:17.933044    1740 main.go:141] libmachine: STDOUT: 
	I0918 11:52:17.933072    1740 main.go:141] libmachine: STDERR: 
	I0918 11:52:17.933136    1740 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2 +20000M
	I0918 11:52:17.940597    1740 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 11:52:17.940609    1740 main.go:141] libmachine: STDERR: 
	I0918 11:52:17.940623    1740 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2
	I0918 11:52:17.940628    1740 main.go:141] libmachine: Starting QEMU VM...
	I0918 11:52:17.940657    1740 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ae:e8:0a:fd:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2
	I0918 11:52:18.008779    1740 main.go:141] libmachine: STDOUT: 
	I0918 11:52:18.008804    1740 main.go:141] libmachine: STDERR: 
	I0918 11:52:18.008808    1740 main.go:141] libmachine: Attempt 0
	I0918 11:52:18.008820    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:20.011046    1740 main.go:141] libmachine: Attempt 1
	I0918 11:52:20.011124    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:22.013479    1740 main.go:141] libmachine: Attempt 2
	I0918 11:52:22.013559    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:24.015632    1740 main.go:141] libmachine: Attempt 3
	I0918 11:52:24.015645    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:26.017675    1740 main.go:141] libmachine: Attempt 4
	I0918 11:52:26.017681    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:28.018761    1740 main.go:141] libmachine: Attempt 5
	I0918 11:52:28.018782    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:30.020886    1740 main.go:141] libmachine: Attempt 6
	I0918 11:52:30.020920    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:30.021070    1740 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0918 11:52:30.021123    1740 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6509edec}
	I0918 11:52:30.021130    1740 main.go:141] libmachine: Found match: ce:ae:e8:a:fd:16
	I0918 11:52:30.021140    1740 main.go:141] libmachine: IP: 192.168.105.2
	I0918 11:52:30.021147    1740 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0918 11:52:31.026067    1740 machine.go:88] provisioning docker machine ...
	I0918 11:52:31.026085    1740 buildroot.go:166] provisioning hostname "addons-221000"
	I0918 11:52:31.026964    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.027231    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.027237    1740 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-221000 && echo "addons-221000" | sudo tee /etc/hostname
	I0918 11:52:31.084404    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-221000
	
	I0918 11:52:31.084473    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.084732    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.084740    1740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-221000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-221000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-221000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 11:52:31.144009    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 11:52:31.144022    1740 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17263-1251/.minikube CaCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17263-1251/.minikube}
	I0918 11:52:31.144033    1740 buildroot.go:174] setting up certificates
	I0918 11:52:31.144038    1740 provision.go:83] configureAuth start
	I0918 11:52:31.144042    1740 provision.go:138] copyHostCerts
	I0918 11:52:31.144138    1740 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.pem (1082 bytes)
	I0918 11:52:31.144342    1740 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/cert.pem (1123 bytes)
	I0918 11:52:31.144435    1740 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/key.pem (1679 bytes)
	I0918 11:52:31.144503    1740 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem org=jenkins.addons-221000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-221000]
	I0918 11:52:31.225327    1740 provision.go:172] copyRemoteCerts
	I0918 11:52:31.225385    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 11:52:31.225394    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:52:31.256352    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 11:52:31.263330    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0918 11:52:31.270197    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 11:52:31.276715    1740 provision.go:86] duration metric: configureAuth took 132.670667ms
	I0918 11:52:31.276723    1740 buildroot.go:189] setting minikube options for container-runtime
	I0918 11:52:31.276820    1740 config.go:182] Loaded profile config "addons-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 11:52:31.276857    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.277075    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.277080    1740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 11:52:31.337901    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0918 11:52:31.337910    1740 buildroot.go:70] root file system type: tmpfs
	I0918 11:52:31.337970    1740 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 11:52:31.338012    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.338275    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.338315    1740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 11:52:31.400816    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 11:52:31.400863    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.401116    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.401126    1740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0918 11:52:31.746670    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0918 11:52:31.746684    1740 machine.go:91] provisioned docker machine in 720.613ms
	I0918 11:52:31.746690    1740 client.go:171] LocalClient.Create took 14.585099291s
	I0918 11:52:31.746703    1740 start.go:167] duration metric: libmachine.API.Create for "addons-221000" took 14.585173417s
	I0918 11:52:31.746707    1740 start.go:300] post-start starting for "addons-221000" (driver="qemu2")
	I0918 11:52:31.746711    1740 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 11:52:31.746780    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 11:52:31.746790    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:52:31.775601    1740 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 11:52:31.776975    1740 info.go:137] Remote host: Buildroot 2021.02.12
	I0918 11:52:31.776983    1740 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17263-1251/.minikube/addons for local assets ...
	I0918 11:52:31.777055    1740 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17263-1251/.minikube/files for local assets ...
	I0918 11:52:31.777083    1740 start.go:303] post-start completed in 30.374292ms
	I0918 11:52:31.777437    1740 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/config.json ...
	I0918 11:52:31.777603    1740 start.go:128] duration metric: createHost completed in 14.974815417s
	I0918 11:52:31.777667    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.777884    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.777888    1740 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0918 11:52:31.833629    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695063151.431021085
	
	I0918 11:52:31.833635    1740 fix.go:206] guest clock: 1695063151.431021085
	I0918 11:52:31.833638    1740 fix.go:219] Guest: 2023-09-18 11:52:31.431021085 -0700 PDT Remote: 2023-09-18 11:52:31.777608 -0700 PDT m=+15.083726834 (delta=-346.586915ms)
	I0918 11:52:31.833654    1740 fix.go:190] guest clock delta is within tolerance: -346.586915ms
	I0918 11:52:31.833656    1740 start.go:83] releasing machines lock for "addons-221000", held for 15.0309195s
	I0918 11:52:31.833905    1740 ssh_runner.go:195] Run: cat /version.json
	I0918 11:52:31.833915    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:52:31.833930    1740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 11:52:31.833973    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:52:31.902596    1740 ssh_runner.go:195] Run: systemctl --version
	I0918 11:52:31.904777    1740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 11:52:31.906638    1740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 11:52:31.906668    1740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 11:52:31.911697    1740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 11:52:31.911704    1740 start.go:469] detecting cgroup driver to use...
	I0918 11:52:31.911799    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 11:52:31.917320    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0918 11:52:31.920500    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 11:52:31.923811    1740 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 11:52:31.923843    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 11:52:31.926950    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 11:52:31.929666    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 11:52:31.932680    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 11:52:31.936002    1740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 11:52:31.939362    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 11:52:31.942186    1740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 11:52:31.944734    1740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 11:52:31.947664    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:32.027464    1740 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0918 11:52:32.036559    1740 start.go:469] detecting cgroup driver to use...
	I0918 11:52:32.036614    1740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 11:52:32.042440    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 11:52:32.047583    1740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 11:52:32.053840    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 11:52:32.058225    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 11:52:32.062305    1740 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0918 11:52:32.098720    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 11:52:32.103440    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 11:52:32.108844    1740 ssh_runner.go:195] Run: which cri-dockerd
	I0918 11:52:32.110173    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 11:52:32.112731    1740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0918 11:52:32.117532    1740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 11:52:32.194769    1740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 11:52:32.269401    1740 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 11:52:32.269417    1740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0918 11:52:32.274373    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:32.355030    1740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 11:52:33.517984    1740 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1629475s)
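	The `scp memory --> /etc/docker/daemon.json` step followed by the daemon-reload/restart above is minikube rendering a daemon.json that pins Docker to the cgroupfs driver. A hedged sketch of such a file (assumption: only the cgroup-driver key is shown; minikube's actual 144-byte file carries additional settings):

	```shell
	# Write a minimal daemon.json forcing the cgroupfs cgroup driver.
	# Assumption: ./docker.demo stands in for /etc/docker.
	mkdir -p ./docker.demo
	cat > ./docker.demo/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	# After a real `systemctl restart docker`, `docker info --format
	# '{{.CgroupDriver}}'` would report "cgroupfs" (not executed here).
	grep -q 'cgroupfs' ./docker.demo/daemon.json && echo ok
	```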
	I0918 11:52:33.518044    1740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 11:52:33.595160    1740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0918 11:52:33.670332    1740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 11:52:33.746578    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:33.822625    1740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0918 11:52:33.829925    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:33.909957    1740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0918 11:52:33.933398    1740 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0918 11:52:33.933487    1740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
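	"Will wait 60s for socket path" above is implemented as a stat poll against /var/run/cri-dockerd.sock. A generic sketch of that wait loop (assumption: a plain file stands in for the unix socket, and it already exists so the loop exits immediately):

	```shell
	# Poll for a path to appear, giving up after 60 one-second tries.
	touch ./cri-dockerd.sock.demo   # pretend the socket is already there
	for i in $(seq 1 60); do
	  if stat ./cri-dockerd.sock.demo >/dev/null 2>&1; then
	    echo ready
	    break
	  fi
	  sleep 1
	done
	```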
	I0918 11:52:33.935757    1740 start.go:537] Will wait 60s for crictl version
	I0918 11:52:33.935801    1740 ssh_runner.go:195] Run: which crictl
	I0918 11:52:33.937147    1740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 11:52:33.952602    1740 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0918 11:52:33.952673    1740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 11:52:33.962082    1740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 11:52:33.975334    1740 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0918 11:52:33.975416    1740 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0918 11:52:33.976970    1740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
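	The bash one-liner above is minikube's idempotent hosts-file update: strip any stale `host.minikube.internal` line, append the current mapping, and copy the result back (the final `sudo cp` exists because a plain redirect cannot be elevated). The same pattern against a scratch file, with illustrative paths:

	```shell
	# Idempotent "replace or append" for a hosts-style file.
	HOSTS=./hosts.demo
	printf '127.0.0.1\tlocalhost\n192.168.105.1\thost.minikube.internal\n' > "$HOSTS"
	# Drop any existing mapping, then append the fresh one.
	{ grep -v $'\thost.minikube.internal$' "$HOSTS"; \
	  printf '192.168.105.1\thost.minikube.internal\n'; } > "$HOSTS.new"
	mv "$HOSTS.new" "$HOSTS"
	grep -c 'host.minikube.internal' "$HOSTS"   # → 1 (no duplicate entries)
	```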
	I0918 11:52:33.980853    1740 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 11:52:33.980897    1740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 11:52:33.986139    1740 docker.go:636] Got preloaded images: 
	I0918 11:52:33.986147    1740 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0918 11:52:33.986189    1740 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 11:52:33.988954    1740 ssh_runner.go:195] Run: which lz4
	I0918 11:52:33.990479    1740 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 11:52:33.991766    1740 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 11:52:33.991780    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I0918 11:52:35.310513    1740 docker.go:600] Took 1.320057 seconds to copy over tarball
	I0918 11:52:35.310582    1740 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 11:52:36.348518    1740 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.03793175s)
	I0918 11:52:36.348535    1740 ssh_runner.go:146] rm: /preloaded.tar.lz4
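	The preload above ships ~357 MB of container images as a tar.lz4, unpacks it with `tar -I lz4 -C /var -xf`, then removes the tarball. The same `-I` (external compress-program) pattern, sketched with gzip standing in for lz4 so it runs where the lz4 tool isn't installed:

	```shell
	# Pack and unpack through an external compressor via tar's -I flag.
	# Assumption: gzip substitutes for the lz4 used by minikube's preload,
	# and ./preload.demo paths are illustrative.
	mkdir -p ./preload.demo/var/lib
	echo 'image-layers' > ./preload.demo/payload.txt
	tar -I gzip -cf ./preload.tar.gz -C ./preload.demo payload.txt
	tar -I gzip -xf ./preload.tar.gz -C ./preload.demo/var/lib
	cat ./preload.demo/var/lib/payload.txt   # → image-layers
	rm ./preload.tar.gz                      # mirror the log's cleanup step
	```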
	I0918 11:52:36.364745    1740 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 11:52:36.368295    1740 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0918 11:52:36.373429    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:36.450305    1740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 11:52:38.940530    1740 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.490231167s)
	I0918 11:52:38.940627    1740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 11:52:38.946699    1740 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 11:52:38.946709    1740 cache_images.go:84] Images are preloaded, skipping loading
	I0918 11:52:38.946766    1740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 11:52:38.954428    1740 cni.go:84] Creating CNI manager for ""
	I0918 11:52:38.954439    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 11:52:38.954458    1740 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0918 11:52:38.954467    1740 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-221000 NodeName:addons-221000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 11:52:38.954540    1740 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-221000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 11:52:38.954592    1740 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-221000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0918 11:52:38.954661    1740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0918 11:52:38.957531    1740 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 11:52:38.957562    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 11:52:38.960385    1740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0918 11:52:38.965592    1740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 11:52:38.970301    1740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0918 11:52:38.975165    1740 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0918 11:52:38.976400    1740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 11:52:38.980255    1740 certs.go:56] Setting up /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000 for IP: 192.168.105.2
	I0918 11:52:38.980276    1740 certs.go:190] acquiring lock for shared ca certs: {Name:mkfac81ee65979b8c4f5ece6243c3a0190531689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:38.980470    1740 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key
	I0918 11:52:39.170828    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt ...
	I0918 11:52:39.170844    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt: {Name:mk0f303ee67627c25d1d04e1887861f15cdad763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.171150    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key ...
	I0918 11:52:39.171155    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key: {Name:mkc5e20e8161cfdcfc3d5dcd8300765ea2c12112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.171271    1740 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key
	I0918 11:52:39.287022    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.crt ...
	I0918 11:52:39.287027    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.crt: {Name:mk54c49c3c44ff09930e6c0f57238b89cff4c5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.287171    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key ...
	I0918 11:52:39.287173    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key: {Name:mk05faae5769358f82565f32c1f37a244f2478c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.287315    1740 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.key
	I0918 11:52:39.287337    1740 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt with IP's: []
	I0918 11:52:39.376234    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt ...
	I0918 11:52:39.376241    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: {Name:mkc8e654c6f2522197f557cb47d266f15eebaadd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.376467    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.key ...
	I0918 11:52:39.376471    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.key: {Name:mkf345dd56f86115b31ecd965617f4c21d6a0cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.376571    1740 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key.96055969
	I0918 11:52:39.376580    1740 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0918 11:52:39.429944    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt.96055969 ...
	I0918 11:52:39.429952    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt.96055969: {Name:mkd69eb587bd0dc6ccdbaa88b78f4f92f2b47b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.430095    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key.96055969 ...
	I0918 11:52:39.430098    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key.96055969: {Name:mk76ea3a7fbbef2305f74e52afdf06cda921c8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.430199    1740 certs.go:337] copying /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt
	I0918 11:52:39.430382    1740 certs.go:341] copying /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key
	I0918 11:52:39.430499    1740 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.key
	I0918 11:52:39.430509    1740 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.crt with IP's: []
	I0918 11:52:39.698555    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.crt ...
	I0918 11:52:39.698563    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.crt: {Name:mk6d7a924ed10f0012b290ec4e0ea6bf1b7bfc02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.698767    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.key ...
	I0918 11:52:39.698773    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.key: {Name:mk8d78e9179e4c57e4602e98d4fc6a37885b4d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.699037    1740 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 11:52:39.699062    1740 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem (1082 bytes)
	I0918 11:52:39.699081    1740 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem (1123 bytes)
	I0918 11:52:39.699100    1740 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem (1679 bytes)
	I0918 11:52:39.699418    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0918 11:52:39.707305    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 11:52:39.713956    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 11:52:39.720614    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 11:52:39.727643    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 11:52:39.734609    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 11:52:39.741305    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 11:52:39.748141    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0918 11:52:39.755277    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 11:52:39.762025    1740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 11:52:39.767701    1740 ssh_runner.go:195] Run: openssl version
	I0918 11:52:39.769871    1740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 11:52:39.772909    1740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 11:52:39.774536    1740 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 18 18:52 /usr/share/ca-certificates/minikubeCA.pem
	I0918 11:52:39.774555    1740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 11:52:39.776417    1740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
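	The two Runs above implement OpenSSL's hashed certificate-directory convention: each CA cert under /etc/ssl/certs is reachable through a `<subject-hash>.0` symlink, which is how `b5213941.0` maps back to minikubeCA.pem. A sketch with a throwaway CA (assumption: all paths under ./certs.demo are illustrative):

	```shell
	# Create a self-signed CA, then link it under its OpenSSL subject hash.
	mkdir -p ./certs.demo
	openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=minikubeCA' \
	  -keyout ./certs.demo/ca.key -out ./certs.demo/minikubeCA.pem 2>/dev/null
	HASH=$(openssl x509 -hash -noout -in ./certs.demo/minikubeCA.pem)
	ln -fs minikubeCA.pem "./certs.demo/${HASH}.0"
	readlink "./certs.demo/${HASH}.0"   # → minikubeCA.pem
	```

	OpenSSL's verify machinery looks certificates up by this hash, so the symlink name, not the .pem filename, is what matters at verification time.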
	I0918 11:52:39.779698    1740 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0918 11:52:39.781162    1740 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0918 11:52:39.781200    1740 kubeadm.go:404] StartCluster: {Name:addons-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 11:52:39.781263    1740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 11:52:39.787256    1740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 11:52:39.790167    1740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 11:52:39.792879    1740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 11:52:39.795890    1740 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 11:52:39.795906    1740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 11:52:39.820130    1740 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0918 11:52:39.820157    1740 kubeadm.go:322] [preflight] Running pre-flight checks
	I0918 11:52:39.874262    1740 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 11:52:39.874320    1740 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 11:52:39.874401    1740 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 11:52:39.936649    1740 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 11:52:39.946863    1740 out.go:204]   - Generating certificates and keys ...
	I0918 11:52:39.946906    1740 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0918 11:52:39.946940    1740 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0918 11:52:40.057135    1740 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 11:52:40.267412    1740 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0918 11:52:40.415260    1740 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0918 11:52:40.592293    1740 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0918 11:52:40.714190    1740 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0918 11:52:40.714252    1740 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-221000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0918 11:52:40.818329    1740 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0918 11:52:40.818397    1740 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-221000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0918 11:52:41.068370    1740 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 11:52:41.110794    1740 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 11:52:41.218301    1740 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0918 11:52:41.218335    1740 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 11:52:41.282421    1740 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 11:52:41.650315    1740 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 11:52:41.733907    1740 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 11:52:41.925252    1740 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 11:52:41.925561    1740 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 11:52:41.927413    1740 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 11:52:41.931680    1740 out.go:204]   - Booting up control plane ...
	I0918 11:52:41.931754    1740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 11:52:41.931794    1740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 11:52:41.931831    1740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 11:52:41.935171    1740 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 11:52:41.935565    1740 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 11:52:41.935586    1740 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0918 11:52:42.024365    1740 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 11:52:45.527756    1740 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.503453 seconds
	I0918 11:52:45.527852    1740 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 11:52:45.533290    1740 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 11:52:46.043984    1740 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 11:52:46.044088    1740 kubeadm.go:322] [mark-control-plane] Marking the node addons-221000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 11:52:46.548611    1740 kubeadm.go:322] [bootstrap-token] Using token: 0otx18.vbdfa1zgl84pbc1n
	I0918 11:52:46.552403    1740 out.go:204]   - Configuring RBAC rules ...
	I0918 11:52:46.552463    1740 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 11:52:46.553348    1740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 11:52:46.557357    1740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 11:52:46.558552    1740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 11:52:46.559879    1740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 11:52:46.560890    1740 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 11:52:46.567944    1740 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 11:52:46.739677    1740 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0918 11:52:46.956246    1740 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0918 11:52:46.958050    1740 kubeadm.go:322] 
	I0918 11:52:46.958085    1740 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0918 11:52:46.958096    1740 kubeadm.go:322] 
	I0918 11:52:46.958138    1740 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0918 11:52:46.958143    1740 kubeadm.go:322] 
	I0918 11:52:46.958157    1740 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0918 11:52:46.958186    1740 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 11:52:46.958221    1740 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 11:52:46.958226    1740 kubeadm.go:322] 
	I0918 11:52:46.958261    1740 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0918 11:52:46.958267    1740 kubeadm.go:322] 
	I0918 11:52:46.958291    1740 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 11:52:46.958294    1740 kubeadm.go:322] 
	I0918 11:52:46.958317    1740 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0918 11:52:46.958375    1740 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 11:52:46.958411    1740 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 11:52:46.958416    1740 kubeadm.go:322] 
	I0918 11:52:46.958458    1740 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 11:52:46.958503    1740 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0918 11:52:46.958507    1740 kubeadm.go:322] 
	I0918 11:52:46.958562    1740 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0otx18.vbdfa1zgl84pbc1n \
	I0918 11:52:46.958623    1740 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ba7ba5242719fff40b98ade8a053fc0c6dded3185e28e2742ca7444c7c25a7a5 \
	I0918 11:52:46.958634    1740 kubeadm.go:322] 	--control-plane 
	I0918 11:52:46.958636    1740 kubeadm.go:322] 
	I0918 11:52:46.958676    1740 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0918 11:52:46.958681    1740 kubeadm.go:322] 
	I0918 11:52:46.958735    1740 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0otx18.vbdfa1zgl84pbc1n \
	I0918 11:52:46.958805    1740 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ba7ba5242719fff40b98ade8a053fc0c6dded3185e28e2742ca7444c7c25a7a5 
	I0918 11:52:46.958862    1740 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 11:52:46.958868    1740 cni.go:84] Creating CNI manager for ""
	I0918 11:52:46.958880    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 11:52:46.967096    1740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 11:52:46.970213    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 11:52:46.973479    1740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0918 11:52:46.977991    1740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 11:52:46.978031    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:46.978044    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36 minikube.k8s.io/name=addons-221000 minikube.k8s.io/updated_at=2023_09_18T11_52_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:47.038282    1740 ops.go:34] apiserver oom_adj: -16
	I0918 11:52:47.038334    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:47.073908    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:47.620721    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:48.118781    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:48.620650    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:49.120663    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:49.620741    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:50.120032    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:50.620638    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:51.118777    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:51.618788    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:52.120234    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:52.619043    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:53.120606    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:53.619266    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:54.118979    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:54.618916    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:55.120620    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:55.620599    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:56.120585    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:56.618629    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:57.120587    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:57.618783    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:58.120645    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:58.620559    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:59.120556    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:59.619252    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:53:00.118796    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:53:00.169662    1740 kubeadm.go:1081] duration metric: took 13.191788791s to wait for elevateKubeSystemPrivileges.
	I0918 11:53:00.169677    1740 kubeadm.go:406] StartCluster complete in 20.38867075s
	I0918 11:53:00.169687    1740 settings.go:142] acquiring lock: {Name:mke420f28dda4f7a752738b3e6d571dc4216779e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:53:00.169849    1740 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 11:53:00.170110    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/kubeconfig: {Name:mk07020c5b974cf07ca0cda25f72a521eb245fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:53:00.170308    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 11:53:00.170433    1740 config.go:182] Loaded profile config "addons-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 11:53:00.170379    1740 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0918 11:53:00.170512    1740 addons.go:69] Setting ingress=true in profile "addons-221000"
	I0918 11:53:00.170517    1740 addons.go:69] Setting ingress-dns=true in profile "addons-221000"
	I0918 11:53:00.170520    1740 addons.go:231] Setting addon ingress=true in "addons-221000"
	I0918 11:53:00.170523    1740 addons.go:231] Setting addon ingress-dns=true in "addons-221000"
	I0918 11:53:00.170533    1740 addons.go:69] Setting metrics-server=true in profile "addons-221000"
	I0918 11:53:00.170538    1740 addons.go:231] Setting addon metrics-server=true in "addons-221000"
	I0918 11:53:00.170552    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.170556    1740 addons.go:69] Setting inspektor-gadget=true in profile "addons-221000"
	I0918 11:53:00.170560    1740 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-221000"
	I0918 11:53:00.170564    1740 addons.go:69] Setting gcp-auth=true in profile "addons-221000"
	I0918 11:53:00.170569    1740 mustload.go:65] Loading cluster: addons-221000
	I0918 11:53:00.170573    1740 addons.go:69] Setting registry=true in profile "addons-221000"
	I0918 11:53:00.170577    1740 addons.go:231] Setting addon registry=true in "addons-221000"
	I0918 11:53:00.170587    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.170595    1740 addons.go:69] Setting default-storageclass=true in profile "addons-221000"
	I0918 11:53:00.170618    1740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-221000"
	I0918 11:53:00.170629    1740 addons.go:69] Setting storage-provisioner=true in profile "addons-221000"
	I0918 11:53:00.170639    1740 config.go:182] Loaded profile config "addons-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 11:53:00.170657    1740 addons.go:231] Setting addon storage-provisioner=true in "addons-221000"
	I0918 11:53:00.170705    1740 host.go:66] Checking if "addons-221000" exists ...
	W0918 11:53:00.170820    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	W0918 11:53:00.170825    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	W0918 11:53:00.170829    1740 addons.go:277] "addons-221000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0918 11:53:00.170831    1740 addons.go:277] "addons-221000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0918 11:53:00.170552    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.170834    1740 addons.go:467] Verifying addon registry=true in "addons-221000"
	I0918 11:53:00.170556    1740 addons.go:69] Setting cloud-spanner=true in profile "addons-221000"
	I0918 11:53:00.175435    1740 out.go:177] * Verifying registry addon...
	I0918 11:53:00.170560    1740 addons.go:231] Setting addon inspektor-gadget=true in "addons-221000"
	I0918 11:53:00.170569    1740 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-221000"
	I0918 11:53:00.170865    1740 addons.go:231] Setting addon cloud-spanner=true in "addons-221000"
	I0918 11:53:00.170553    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.170512    1740 addons.go:69] Setting volumesnapshots=true in profile "addons-221000"
	W0918 11:53:00.171149    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	W0918 11:53:00.171162    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	I0918 11:53:00.171465    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.188489    1740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 11:53:00.182591    1740 host.go:66] Checking if "addons-221000" exists ...
	W0918 11:53:00.182599    1740 addons.go:277] "addons-221000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0918 11:53:00.182603    1740 addons_storage_classes.go:55] "addons-221000" is not running, writing default-storageclass=true to disk and skipping enablement
	I0918 11:53:00.182614    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.182623    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.182641    1740 addons.go:231] Setting addon volumesnapshots=true in "addons-221000"
	I0918 11:53:00.183250    1740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 11:53:00.196542    1740 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0918 11:53:00.192833    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.192879    1740 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 11:53:00.192896    1740 addons.go:467] Verifying addon ingress=true in "addons-221000"
	I0918 11:53:00.192903    1740 addons.go:231] Setting addon default-storageclass=true in "addons-221000"
	W0918 11:53:00.193202    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	I0918 11:53:00.196510    1740 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-221000" context rescaled to 1 replicas
	I0918 11:53:00.199509    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 11:53:00.199541    1740 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 11:53:00.199554    1740 host.go:66] Checking if "addons-221000" exists ...
	W0918 11:53:00.199564    1740 addons.go:277] "addons-221000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0918 11:53:00.206530    1740 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0918 11:53:00.210356    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 11:53:00.210367    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.210376    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 11:53:00.210391    1740 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 11:53:00.211130    1740 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 11:53:00.213481    1740 out.go:177] * Verifying ingress addon...
	I0918 11:53:00.214506    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 11:53:00.214532    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.214630    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 11:53:00.218827    1740 out.go:177] * Verifying Kubernetes components...
	I0918 11:53:00.221560    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 11:53:00.218836    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.229444    1740 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 11:53:00.229456    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 11:53:00.217515    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 11:53:00.229466    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 11:53:00.229467    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.229472    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.220086    1740 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 11:53:00.221623    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 11:53:00.236499    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 11:53:00.234112    1740 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0918 11:53:00.242380    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 11:53:00.251491    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 11:53:00.252703    1740 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 11:53:00.260396    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 11:53:00.267323    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 11:53:00.274307    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 11:53:00.284458    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 11:53:00.287512    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 11:53:00.287521    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 11:53:00.287531    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.322513    1740 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 11:53:00.322523    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 11:53:00.336060    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 11:53:00.340017    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 11:53:00.346686    1740 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 11:53:00.346693    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 11:53:00.362466    1740 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 11:53:00.362477    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 11:53:00.363923    1740 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 11:53:00.363928    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 11:53:00.368153    1740 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 11:53:00.368161    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 11:53:00.376728    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 11:53:00.376740    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 11:53:00.395920    1740 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 11:53:00.395931    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 11:53:00.403313    1740 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 11:53:00.403320    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 11:53:00.404660    1740 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 11:53:00.404665    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 11:53:00.430003    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 11:53:00.430014    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 11:53:00.492665    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 11:53:00.492677    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 11:53:00.495149    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 11:53:00.495157    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 11:53:00.500095    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 11:53:00.522789    1740 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 11:53:00.522799    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 11:53:00.561433    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 11:53:00.561445    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 11:53:00.561578    1740 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 11:53:00.561584    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 11:53:00.579630    1740 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 11:53:00.579641    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 11:53:00.604202    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 11:53:00.607378    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 11:53:00.607389    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 11:53:00.624742    1740 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 11:53:00.624755    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 11:53:00.641432    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 11:53:00.641441    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 11:53:00.675716    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 11:53:00.675728    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 11:53:00.684962    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 11:53:00.684971    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 11:53:00.690643    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 11:53:00.690652    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 11:53:00.691887    1740 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 11:53:00.691893    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 11:53:00.701987    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 11:53:00.701999    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 11:53:00.719094    1740 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 11:53:00.719103    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0918 11:53:00.821368    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 11:53:00.824330    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 11:53:01.167637    1740 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0918 11:53:01.168100    1740 node_ready.go:35] waiting up to 6m0s for node "addons-221000" to be "Ready" ...
	I0918 11:53:01.169940    1740 node_ready.go:49] node "addons-221000" has status "Ready":"True"
	I0918 11:53:01.169963    1740 node_ready.go:38] duration metric: took 1.836333ms waiting for node "addons-221000" to be "Ready" ...
	I0918 11:53:01.169968    1740 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 11:53:01.172987    1740 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mbgns" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:01.659785    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.159683334s)
	I0918 11:53:01.659805    1740 addons.go:467] Verifying addon metrics-server=true in "addons-221000"
	I0918 11:53:01.659835    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.055627667s)
	W0918 11:53:01.659861    1740 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 11:53:01.659882    1740 retry.go:31] will retry after 288.841008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 11:53:01.660144    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.320131291s)
	I0918 11:53:01.949280    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 11:53:02.183526    1740 pod_ready.go:92] pod "coredns-5dd5756b68-mbgns" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.183539    1740 pod_ready.go:81] duration metric: took 1.010553542s waiting for pod "coredns-5dd5756b68-mbgns" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.183545    1740 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-z4cmf" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.184891    1740 pod_ready.go:97] error getting pod "coredns-5dd5756b68-z4cmf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-z4cmf" not found
	I0918 11:53:02.184904    1740 pod_ready.go:81] duration metric: took 1.354709ms waiting for pod "coredns-5dd5756b68-z4cmf" in "kube-system" namespace to be "Ready" ...
	E0918 11:53:02.184909    1740 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-z4cmf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-z4cmf" not found
	I0918 11:53:02.184914    1740 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.187700    1740 pod_ready.go:92] pod "etcd-addons-221000" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.187709    1740 pod_ready.go:81] duration metric: took 2.791875ms waiting for pod "etcd-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.187714    1740 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.190228    1740 pod_ready.go:92] pod "kube-apiserver-addons-221000" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.190236    1740 pod_ready.go:81] duration metric: took 2.518208ms waiting for pod "kube-apiserver-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.190240    1740 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.571748    1740 pod_ready.go:92] pod "kube-controller-manager-addons-221000" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.571761    1740 pod_ready.go:81] duration metric: took 381.518375ms waiting for pod "kube-controller-manager-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.571765    1740 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7gqn" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.823208    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.998874458s)
	I0918 11:53:02.823230    1740 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-221000"
	I0918 11:53:02.828329    1740 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 11:53:02.838784    1740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 11:53:02.842760    1740 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 11:53:02.842767    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:02.847417    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:02.972129    1740 pod_ready.go:92] pod "kube-proxy-q7gqn" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.972138    1740 pod_ready.go:81] duration metric: took 400.3735ms waiting for pod "kube-proxy-q7gqn" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.972143    1740 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:03.351863    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:03.371796    1740 pod_ready.go:92] pod "kube-scheduler-addons-221000" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:03.371806    1740 pod_ready.go:81] duration metric: took 399.662875ms waiting for pod "kube-scheduler-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:03.371810    1740 pod_ready.go:38] duration metric: took 2.201856875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 11:53:03.371820    1740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 11:53:03.371875    1740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 11:53:03.851785    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:04.352157    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:04.679993    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.73071775s)
	I0918 11:53:04.680002    1740 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.308127625s)
	I0918 11:53:04.680021    1740 api_server.go:72] duration metric: took 4.46545525s to wait for apiserver process to appear ...
	I0918 11:53:04.680025    1740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 11:53:04.680031    1740 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0918 11:53:04.683995    1740 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0918 11:53:04.684750    1740 api_server.go:141] control plane version: v1.28.2
	I0918 11:53:04.684756    1740 api_server.go:131] duration metric: took 4.728917ms to wait for apiserver health ...
	I0918 11:53:04.684760    1740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 11:53:04.689157    1740 system_pods.go:59] 13 kube-system pods found
	I0918 11:53:04.689166    1740 system_pods.go:61] "coredns-5dd5756b68-mbgns" [376db80e-bef7-49a8-805c-d250bbb5ddc5] Running
	I0918 11:53:04.689171    1740 system_pods.go:61] "csi-hostpath-attacher-0" [b3eb2340-f156-4127-b844-79013849b5d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 11:53:04.689175    1740 system_pods.go:61] "csi-hostpath-resizer-0" [a39fc7e1-21e6-43e0-8a71-76fc3122aa67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 11:53:04.689182    1740 system_pods.go:61] "csi-hostpathplugin-s878j" [f7db805b-46b1-4d4f-b620-4bb9732a0ba0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 11:53:04.689185    1740 system_pods.go:61] "etcd-addons-221000" [4500c338-1b0c-4d39-b5b6-76d42cf285f5] Running
	I0918 11:53:04.689188    1740 system_pods.go:61] "kube-apiserver-addons-221000" [93a9cc2e-3463-4332-a37c-86437106ed5e] Running
	I0918 11:53:04.689190    1740 system_pods.go:61] "kube-controller-manager-addons-221000" [eaeef50f-2b51-43b2-91c9-a4f97f5460ae] Running
	I0918 11:53:04.689193    1740 system_pods.go:61] "kube-proxy-q7gqn" [e971c33c-7d1b-47b7-9ff5-3a629f12fb57] Running
	I0918 11:53:04.689195    1740 system_pods.go:61] "kube-scheduler-addons-221000" [9d39af63-6f0a-4855-9435-f8e7af26869e] Running
	I0918 11:53:04.689199    1740 system_pods.go:61] "metrics-server-7c66d45ddc-ph4qt" [193ff040-48cb-4429-8aa3-4065ecb5241d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 11:53:04.689205    1740 system_pods.go:61] "snapshot-controller-58dbcc7b99-89j9m" [d9a96c2a-2231-4dea-abbf-16875dd2b1d7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 11:53:04.689210    1740 system_pods.go:61] "snapshot-controller-58dbcc7b99-xwwxn" [5a80446c-a3a8-4ce7-8ac4-2894087691fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 11:53:04.689214    1740 system_pods.go:61] "storage-provisioner" [88ff8527-97e2-4317-8d5b-a2502e8cb7f7] Running
	I0918 11:53:04.689217    1740 system_pods.go:74] duration metric: took 4.45475ms to wait for pod list to return data ...
	I0918 11:53:04.689220    1740 default_sa.go:34] waiting for default service account to be created ...
	I0918 11:53:04.690951    1740 default_sa.go:45] found service account: "default"
	I0918 11:53:04.690958    1740 default_sa.go:55] duration metric: took 1.736083ms for default service account to be created ...
	I0918 11:53:04.690961    1740 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 11:53:04.694962    1740 system_pods.go:86] 13 kube-system pods found
	I0918 11:53:04.694971    1740 system_pods.go:89] "coredns-5dd5756b68-mbgns" [376db80e-bef7-49a8-805c-d250bbb5ddc5] Running
	I0918 11:53:04.694976    1740 system_pods.go:89] "csi-hostpath-attacher-0" [b3eb2340-f156-4127-b844-79013849b5d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 11:53:04.694979    1740 system_pods.go:89] "csi-hostpath-resizer-0" [a39fc7e1-21e6-43e0-8a71-76fc3122aa67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 11:53:04.694983    1740 system_pods.go:89] "csi-hostpathplugin-s878j" [f7db805b-46b1-4d4f-b620-4bb9732a0ba0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 11:53:04.694986    1740 system_pods.go:89] "etcd-addons-221000" [4500c338-1b0c-4d39-b5b6-76d42cf285f5] Running
	I0918 11:53:04.694988    1740 system_pods.go:89] "kube-apiserver-addons-221000" [93a9cc2e-3463-4332-a37c-86437106ed5e] Running
	I0918 11:53:04.694990    1740 system_pods.go:89] "kube-controller-manager-addons-221000" [eaeef50f-2b51-43b2-91c9-a4f97f5460ae] Running
	I0918 11:53:04.694994    1740 system_pods.go:89] "kube-proxy-q7gqn" [e971c33c-7d1b-47b7-9ff5-3a629f12fb57] Running
	I0918 11:53:04.694996    1740 system_pods.go:89] "kube-scheduler-addons-221000" [9d39af63-6f0a-4855-9435-f8e7af26869e] Running
	I0918 11:53:04.694999    1740 system_pods.go:89] "metrics-server-7c66d45ddc-ph4qt" [193ff040-48cb-4429-8aa3-4065ecb5241d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 11:53:04.695003    1740 system_pods.go:89] "snapshot-controller-58dbcc7b99-89j9m" [d9a96c2a-2231-4dea-abbf-16875dd2b1d7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 11:53:04.695006    1740 system_pods.go:89] "snapshot-controller-58dbcc7b99-xwwxn" [5a80446c-a3a8-4ce7-8ac4-2894087691fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 11:53:04.695009    1740 system_pods.go:89] "storage-provisioner" [88ff8527-97e2-4317-8d5b-a2502e8cb7f7] Running
	I0918 11:53:04.695012    1740 system_pods.go:126] duration metric: took 4.049ms to wait for k8s-apps to be running ...
	I0918 11:53:04.695014    1740 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 11:53:04.695074    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 11:53:04.700754    1740 system_svc.go:56] duration metric: took 5.736541ms WaitForService to wait for kubelet.
	I0918 11:53:04.700761    1740 kubeadm.go:581] duration metric: took 4.486195916s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0918 11:53:04.700771    1740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 11:53:04.702224    1740 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0918 11:53:04.702234    1740 node_conditions.go:123] node cpu capacity is 2
	I0918 11:53:04.702239    1740 node_conditions.go:105] duration metric: took 1.466083ms to run NodePressure ...
	I0918 11:53:04.702244    1740 start.go:228] waiting for startup goroutines ...
	I0918 11:53:04.851794    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:05.352908    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:05.851843    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:06.351684    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:06.787954    1740 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 11:53:06.787972    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:06.819465    1740 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0918 11:53:06.824867    1740 addons.go:231] Setting addon gcp-auth=true in "addons-221000"
	I0918 11:53:06.824887    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:06.825624    1740 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 11:53:06.825631    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:06.851875    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:06.860327    1740 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0918 11:53:06.868318    1740 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0918 11:53:06.871302    1740 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 11:53:06.871308    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 11:53:06.876186    1740 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 11:53:06.876191    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 11:53:06.881437    1740 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 11:53:06.881443    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0918 11:53:06.887709    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 11:53:07.124491    1740 addons.go:467] Verifying addon gcp-auth=true in "addons-221000"
	I0918 11:53:07.129030    1740 out.go:177] * Verifying gcp-auth addon...
	I0918 11:53:07.137330    1740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0918 11:53:07.139319    1740 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 11:53:07.139325    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:07.142056    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:07.351786    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:07.647787    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:07.851746    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:08.144923    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:08.351959    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:08.646023    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:08.851939    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:09.146053    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:09.352896    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:09.645776    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:09.851792    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:10.146843    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:10.351428    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:10.645894    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:10.852236    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:11.145951    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:11.352216    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:11.645774    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:11.852232    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:12.145929    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:12.355119    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:12.840629    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:12.851603    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:13.145852    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:13.351962    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:13.646245    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:13.852112    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:14.144894    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:14.351204    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:14.646046    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:15.059939    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:15.145929    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:15.351906    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:15.646033    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:15.852022    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:16.145893    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:16.351976    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:16.645740    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:16.852007    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:17.145547    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:17.352003    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:17.646147    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:17.853011    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:18.145960    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:18.353519    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:18.646257    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:18.851829    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:19.143968    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:19.351778    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:19.645563    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:19.851678    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:20.145637    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:20.351425    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:20.645727    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:20.852080    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:21.145053    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:21.352010    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:21.645631    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:21.851983    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:22.146102    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:22.351559    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:22.645364    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:22.851995    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:23.145511    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:23.351664    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:23.646427    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:23.851813    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:24.144653    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:24.351670    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:24.645732    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:24.851659    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:25.145755    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:25.350286    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:25.645572    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:25.851968    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:26.145692    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:26.352042    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:26.645575    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:26.852307    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:27.145498    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:27.352114    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:27.645719    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:27.851965    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:28.145987    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:28.351772    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:28.645594    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:28.851993    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:29.145747    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:29.351502    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:29.645697    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:29.851953    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:30.145495    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:30.351423    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:30.645330    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:30.852142    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:31.145422    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:31.352025    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:31.645572    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:31.852021    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:32.145742    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:32.351596    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:32.645761    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:32.851846    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:33.145666    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:33.351607    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:33.646262    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:33.852115    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:34.145556    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:34.353136    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:34.644719    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:34.852049    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:35.145343    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:35.351551    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:35.645361    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:35.851476    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:36.143766    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:36.351679    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:36.645788    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:36.851584    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:37.145933    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:37.351445    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:37.646034    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:37.851557    1740 kapi.go:107] duration metric: took 35.013102459s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0918 11:53:38.145661    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:38.645759    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:39.145382    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:39.645858    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:40.145851    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:40.645842    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:41.145511    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:41.646176    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:42.145937    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:42.645521    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:43.145380    1740 kapi.go:107] duration metric: took 36.008387458s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0918 11:53:43.148984    1740 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-221000 cluster.
	I0918 11:53:43.152958    1740 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 11:53:43.156871    1740 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0918 11:59:00.190235    1740 kapi.go:107] duration metric: took 6m0.011650291s to wait for kubernetes.io/minikube-addons=registry ...
	W0918 11:59:00.190354    1740 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0918 11:59:00.236861    1740 kapi.go:107] duration metric: took 6m0.007418791s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0918 11:59:00.236887    1740 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0918 11:59:00.245622    1740 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, default-storageclass, metrics-server, storage-provisioner, inspektor-gadget, volumesnapshots, csi-hostpath-driver, gcp-auth
	I0918 11:59:00.252559    1740 addons.go:502] enable addons completed in 6m0.086876584s: enabled=[ingress-dns cloud-spanner default-storageclass metrics-server storage-provisioner inspektor-gadget volumesnapshots csi-hostpath-driver gcp-auth]
	I0918 11:59:00.252570    1740 start.go:233] waiting for cluster config update ...
	I0918 11:59:00.252577    1740 start.go:242] writing updated cluster config ...
	I0918 11:59:00.253046    1740 ssh_runner.go:195] Run: rm -f paused
	I0918 11:59:00.282509    1740 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I0918 11:59:00.285533    1740 out.go:177] * Done! kubectl is now configured to use "addons-221000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-18 18:52:28 UTC, ends at Mon 2023-09-18 19:12:19 UTC. --
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.378956054Z" level=info msg="shim disconnected" id=db4658d155f2578fc668ba94399fd4b64164a06274f25bb873a3470c521d2b3b namespace=moby
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.378988221Z" level=warning msg="cleaning up after shim disconnected" id=db4658d155f2578fc668ba94399fd4b64164a06274f25bb873a3470c521d2b3b namespace=moby
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.378992929Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:12:04 addons-221000 dockerd[1100]: time="2023-09-18T19:12:04.436636476Z" level=info msg="ignoring event" container=f31f590bd070326784aa894bb3b130e04a21f13ff6b7835f98e9c9c994be5c89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.437103975Z" level=info msg="shim disconnected" id=f31f590bd070326784aa894bb3b130e04a21f13ff6b7835f98e9c9c994be5c89 namespace=moby
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.437132058Z" level=warning msg="cleaning up after shim disconnected" id=f31f590bd070326784aa894bb3b130e04a21f13ff6b7835f98e9c9c994be5c89 namespace=moby
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.437137642Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.441266547Z" level=warning msg="cleanup warnings time=\"2023-09-18T19:12:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 18 19:12:10 addons-221000 dockerd[1100]: time="2023-09-18T19:12:10.222057782Z" level=info msg="ignoring event" container=ab33a0f8756c5abdbe52f99802252c9d88620a35e7026b8d48ddc961d9fd608a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:12:10 addons-221000 dockerd[1106]: time="2023-09-18T19:12:10.222121782Z" level=info msg="shim disconnected" id=ab33a0f8756c5abdbe52f99802252c9d88620a35e7026b8d48ddc961d9fd608a namespace=moby
	Sep 18 19:12:10 addons-221000 dockerd[1106]: time="2023-09-18T19:12:10.222148615Z" level=warning msg="cleaning up after shim disconnected" id=ab33a0f8756c5abdbe52f99802252c9d88620a35e7026b8d48ddc961d9fd608a namespace=moby
	Sep 18 19:12:10 addons-221000 dockerd[1106]: time="2023-09-18T19:12:10.222152574Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:12:10 addons-221000 dockerd[1100]: time="2023-09-18T19:12:10.284872898Z" level=info msg="ignoring event" container=b759b3bc87e672ab6709e17bb94b6e35eee5335882a2a4303af5bb7802b4d324 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:12:10 addons-221000 dockerd[1106]: time="2023-09-18T19:12:10.284977606Z" level=info msg="shim disconnected" id=b759b3bc87e672ab6709e17bb94b6e35eee5335882a2a4303af5bb7802b4d324 namespace=moby
	Sep 18 19:12:10 addons-221000 dockerd[1106]: time="2023-09-18T19:12:10.285020689Z" level=warning msg="cleaning up after shim disconnected" id=b759b3bc87e672ab6709e17bb94b6e35eee5335882a2a4303af5bb7802b4d324 namespace=moby
	Sep 18 19:12:10 addons-221000 dockerd[1106]: time="2023-09-18T19:12:10.285025106Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:12:10 addons-221000 cri-dockerd[995]: time="2023-09-18T19:12:10Z" level=error msg="EOF Failed to get stats from container ab33a0f8756c5abdbe52f99802252c9d88620a35e7026b8d48ddc961d9fd608a"
	Sep 18 19:12:14 addons-221000 dockerd[1106]: time="2023-09-18T19:12:14.398430134Z" level=info msg="shim disconnected" id=afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a namespace=moby
	Sep 18 19:12:14 addons-221000 dockerd[1100]: time="2023-09-18T19:12:14.398477592Z" level=info msg="ignoring event" container=afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:12:14 addons-221000 dockerd[1106]: time="2023-09-18T19:12:14.398687758Z" level=warning msg="cleaning up after shim disconnected" id=afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a namespace=moby
	Sep 18 19:12:14 addons-221000 dockerd[1106]: time="2023-09-18T19:12:14.398698092Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:12:14 addons-221000 dockerd[1106]: time="2023-09-18T19:12:14.419717366Z" level=info msg="shim disconnected" id=a7e6c06852560a1d0bdec45aab8087733475e5299415ad38f796c38431947c7a namespace=moby
	Sep 18 19:12:14 addons-221000 dockerd[1106]: time="2023-09-18T19:12:14.419822949Z" level=warning msg="cleaning up after shim disconnected" id=a7e6c06852560a1d0bdec45aab8087733475e5299415ad38f796c38431947c7a namespace=moby
	Sep 18 19:12:14 addons-221000 dockerd[1106]: time="2023-09-18T19:12:14.419832116Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:12:14 addons-221000 dockerd[1100]: time="2023-09-18T19:12:14.420080948Z" level=info msg="ignoring event" container=a7e6c06852560a1d0bdec45aab8087733475e5299415ad38f796c38431947c7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED              STATE               NAME                      ATTEMPT             POD ID
	e55be7fdce01f       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98          About a minute ago   Running             headlamp                  0                   6eea9635aa263
	6009e365438d8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf   18 minutes ago       Running             gcp-auth                  0                   e321131e2d88d
	96b1e19d34d0f       ba04bb24b9575                                                                                                  19 minutes ago       Running             storage-provisioner       0                   31030bebae5dd
	3397ed73112e1       97e04611ad434                                                                                                  19 minutes ago       Running             coredns                   0                   ecf214ed85c34
	3b8b236037bf7       7da62c127fc0f                                                                                                  19 minutes ago       Running             kube-proxy                0                   efd5b0f304a7c
	17d16f9191cb9       64fc40cee3716                                                                                                  19 minutes ago       Running             kube-scheduler            0                   6dcf2ed48fa0d
	cb85fb8fd00cf       89d57b83c1786                                                                                                  19 minutes ago       Running             kube-controller-manager   0                   d0674da883f4f
	4a8eb16a561d8       30bb499447fe1                                                                                                  19 minutes ago       Running             kube-apiserver            0                   2a8be15cc8448
	2a7ec1fe69df2       9cdd6470f48c8                                                                                                  19 minutes ago       Running             etcd                      0                   0f317e031c491
	
	* 
	* ==> coredns [3397ed73112e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51523 - 34394 "HINFO IN 2890994018648264973.7586751697303130781. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046989834s
	[INFO] 10.244.0.11:51746 - 24273 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125498s
	[INFO] 10.244.0.11:38492 - 22517 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000252786s
	[INFO] 10.244.0.11:44956 - 48338 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000035291s
	[INFO] 10.244.0.11:34979 - 63104 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000059957s
	[INFO] 10.244.0.11:38551 - 60811 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000037916s
	[INFO] 10.244.0.11:44454 - 13651 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000042082s
	[INFO] 10.244.0.11:41401 - 32727 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001047395s
	[INFO] 10.244.0.11:40924 - 8451 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001040478s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-221000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-221000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36
	                    minikube.k8s.io/name=addons-221000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_18T11_52_46_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-221000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Sep 2023 18:52:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-221000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Sep 2023 19:12:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Sep 2023 19:11:52 +0000   Mon, 18 Sep 2023 18:52:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Sep 2023 19:11:52 +0000   Mon, 18 Sep 2023 18:52:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Sep 2023 19:11:52 +0000   Mon, 18 Sep 2023 18:52:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Sep 2023 19:11:52 +0000   Mon, 18 Sep 2023 18:52:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-221000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 605bc2fc72a045ae88e907db961da3d3
	  System UUID:                605bc2fc72a045ae88e907db961da3d3
	  Boot ID:                    6a9990c2-fe5e-48d8-97ca-ea50d8c8e3b8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-d4c87556c-2vm8d                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  headlamp                    headlamp-699c48fb74-6fg5z                0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 coredns-5dd5756b68-mbgns                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19m
	  kube-system                 etcd-addons-221000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         19m
	  kube-system                 kube-apiserver-addons-221000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-addons-221000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-q7gqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-221000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 19m   kube-proxy       
	  Normal  Starting                 19m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m   kubelet          Node addons-221000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m   kubelet          Node addons-221000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m   kubelet          Node addons-221000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m   kubelet          Node addons-221000 status is now: NodeReady
	  Normal  RegisteredNode           19m   node-controller  Node addons-221000 event: Registered Node addons-221000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.641705] EINJ: EINJ table not found.
	[  +0.512113] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044199] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000795] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.076142] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.068934] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.415221] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.169313] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +0.075007] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[  +0.084392] systemd-fstab-generator[726]: Ignoring "noauto" for root device
	[  +1.144805] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.094980] systemd-fstab-generator[914]: Ignoring "noauto" for root device
	[  +0.076494] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[  +0.074272] systemd-fstab-generator[936]: Ignoring "noauto" for root device
	[  +0.076208] systemd-fstab-generator[947]: Ignoring "noauto" for root device
	[  +0.088287] systemd-fstab-generator[988]: Ignoring "noauto" for root device
	[  +2.538937] systemd-fstab-generator[1093]: Ignoring "noauto" for root device
	[  +2.473246] kauditd_printk_skb: 29 callbacks suppressed
	[  +3.092849] systemd-fstab-generator[1411]: Ignoring "noauto" for root device
	[  +4.633357] systemd-fstab-generator[2291]: Ignoring "noauto" for root device
	[Sep18 18:53] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.646092] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.523270] kauditd_printk_skb: 55 callbacks suppressed
	[ +11.548658] kauditd_printk_skb: 8 callbacks suppressed
	[Sep18 19:12] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [2a7ec1fe69df] <==
	* {"level":"info","ts":"2023-09-18T18:52:43.023404Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T18:52:43.023573Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-18T18:52:43.02399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-09-18T18:52:43.024241Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-18T18:52:43.02431Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-18T18:52:43.024433Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T18:52:43.024491Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T18:52:43.025343Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T18:53:12.839558Z","caller":"traceutil/trace.go:171","msg":"trace[499305584] linearizableReadLoop","detail":"{readStateIndex:693; appliedIndex:692; }","duration":"193.895541ms","start":"2023-09-18T18:53:12.645653Z","end":"2023-09-18T18:53:12.839549Z","steps":["trace[499305584] 'read index received'  (duration: 193.791791ms)","trace[499305584] 'applied index is now lower than readState.Index'  (duration: 103.417µs)"],"step_count":2}
	{"level":"info","ts":"2023-09-18T18:53:12.839607Z","caller":"traceutil/trace.go:171","msg":"trace[593448589] transaction","detail":"{read_only:false; response_revision:671; number_of_response:1; }","duration":"309.762625ms","start":"2023-09-18T18:53:12.529838Z","end":"2023-09-18T18:53:12.839601Z","steps":["trace[593448589] 'process raft request'  (duration: 309.63075ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:53:12.839655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.989167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10532"}
	{"level":"info","ts":"2023-09-18T18:53:12.839671Z","caller":"traceutil/trace.go:171","msg":"trace[1312627899] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:671; }","duration":"194.031ms","start":"2023-09-18T18:53:12.645637Z","end":"2023-09-18T18:53:12.839668Z","steps":["trace[1312627899] 'agreement among raft nodes before linearized reading'  (duration: 193.963833ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:53:12.8398Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-18T18:53:12.529832Z","time spent":"309.7915ms","remote":"127.0.0.1:54556","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:669 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-09-18T18:53:15.058529Z","caller":"traceutil/trace.go:171","msg":"trace[42488278] linearizableReadLoop","detail":"{readStateIndex:694; appliedIndex:693; }","duration":"207.837416ms","start":"2023-09-18T18:53:14.850683Z","end":"2023-09-18T18:53:15.05852Z","steps":["trace[42488278] 'read index received'  (duration: 207.76525ms)","trace[42488278] 'applied index is now lower than readState.Index'  (duration: 71.708µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-18T18:53:15.058651Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.966125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:13 size:63849"}
	{"level":"info","ts":"2023-09-18T18:53:15.058696Z","caller":"traceutil/trace.go:171","msg":"trace[2008509018] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:13; response_revision:672; }","duration":"208.018666ms","start":"2023-09-18T18:53:14.850673Z","end":"2023-09-18T18:53:15.058691Z","steps":["trace[2008509018] 'agreement among raft nodes before linearized reading'  (duration: 207.886666ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:53:15.058824Z","caller":"traceutil/trace.go:171","msg":"trace[1590636669] transaction","detail":"{read_only:false; response_revision:672; number_of_response:1; }","duration":"214.775084ms","start":"2023-09-18T18:53:14.844046Z","end":"2023-09-18T18:53:15.058821Z","steps":["trace[1590636669] 'process raft request'  (duration: 214.424584ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T19:02:43.443637Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1094}
	{"level":"info","ts":"2023-09-18T19:02:43.458272Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1094,"took":"14.26046ms","hash":1108450172}
	{"level":"info","ts":"2023-09-18T19:02:43.458292Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1108450172,"revision":1094,"compact-revision":-1}
	{"level":"info","ts":"2023-09-18T19:07:43.446092Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1454}
	{"level":"info","ts":"2023-09-18T19:07:43.44678Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1454,"took":"494.332µs","hash":2695871163}
	{"level":"info","ts":"2023-09-18T19:07:43.446796Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2695871163,"revision":1454,"compact-revision":1094}
	{"level":"info","ts":"2023-09-18T19:11:06.589478Z","caller":"traceutil/trace.go:171","msg":"trace[847243809] transaction","detail":"{read_only:false; response_revision:2088; number_of_response:1; }","duration":"105.795246ms","start":"2023-09-18T19:11:06.483671Z","end":"2023-09-18T19:11:06.589466Z","steps":["trace[847243809] 'process raft request'  (duration: 76.680577ms)","trace[847243809] 'compare'  (duration: 28.988461ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-18T19:11:19.569525Z","caller":"traceutil/trace.go:171","msg":"trace[199883351] transaction","detail":"{read_only:false; response_revision:2131; number_of_response:1; }","duration":"161.881881ms","start":"2023-09-18T19:11:19.407634Z","end":"2023-09-18T19:11:19.569516Z","steps":["trace[199883351] 'process raft request'  (duration: 161.812756ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [6009e365438d] <==
	* 2023/09/18 18:53:42 GCP Auth Webhook started!
	2023/09/18 19:11:01 Ready to marshal response ...
	2023/09/18 19:11:01 Ready to write response ...
	2023/09/18 19:11:01 Ready to marshal response ...
	2023/09/18 19:11:01 Ready to write response ...
	2023/09/18 19:11:01 Ready to marshal response ...
	2023/09/18 19:11:01 Ready to write response ...
	2023/09/18 19:11:15 Ready to marshal response ...
	2023/09/18 19:11:15 Ready to write response ...
	2023/09/18 19:11:48 Ready to marshal response ...
	2023/09/18 19:11:48 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:12:19 up 19 min,  0 users,  load average: 0.23, 0.19, 0.18
	Linux addons-221000 5.10.57 #1 SMP PREEMPT Fri Sep 15 19:03:18 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4a8eb16a561d] <==
	* I0918 19:10:44.200176       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:11:01.618081       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.100.41"}
	I0918 19:11:27.087628       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0918 19:11:44.199807       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.823931       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.823948       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.825604       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.825722       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.833256       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.833284       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.846841       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.846862       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.847044       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.847095       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.856162       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.856179       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.857357       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.857367       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0918 19:12:04.848160       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0918 19:12:04.857030       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0918 19:12:04.865591       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0918 19:12:11.428741       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0918 19:12:14.330149       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0918 19:12:14.335152       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0918 19:12:15.339237       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	* 
	* ==> kube-controller-manager [cb85fb8fd00c] <==
	* E0918 19:12:04.866260       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:05.815227       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:05.815247       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:06.314592       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:06.314612       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:06.404971       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:06.404987       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:07.765663       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:07.765681       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:09.044305       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:09.044327       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0918 19:12:09.124215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="1.875µs"
	W0918 19:12:09.340711       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:09.340733       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:12.778356       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:12.778375       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:13.814310       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:13.814362       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:15.053536       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:15.053562       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:15.339981       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:16.531417       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:16.531440       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:18.180272       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:18.180294       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [3b8b236037bf] <==
	* I0918 18:53:00.480730       1 server_others.go:69] "Using iptables proxy"
	I0918 18:53:00.488175       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0918 18:53:00.508601       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0918 18:53:00.508621       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 18:53:00.509405       1 server_others.go:152] "Using iptables Proxier"
	I0918 18:53:00.509431       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0918 18:53:00.509518       1 server.go:846] "Version info" version="v1.28.2"
	I0918 18:53:00.509524       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 18:53:00.512081       1 config.go:188] "Starting service config controller"
	I0918 18:53:00.512089       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0918 18:53:00.512105       1 config.go:97] "Starting endpoint slice config controller"
	I0918 18:53:00.512107       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0918 18:53:00.512284       1 config.go:315] "Starting node config controller"
	I0918 18:53:00.512287       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0918 18:53:01.015326       1 shared_informer.go:318] Caches are synced for node config
	I0918 18:53:01.015354       1 shared_informer.go:318] Caches are synced for service config
	I0918 18:53:01.015384       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [17d16f9191cb] <==
	* W0918 18:52:43.862760       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 18:52:43.863344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0918 18:52:43.862808       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 18:52:43.863392       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0918 18:52:43.862823       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 18:52:43.863442       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0918 18:52:43.862835       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 18:52:43.863451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0918 18:52:43.862850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 18:52:43.863466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0918 18:52:43.862867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 18:52:43.863530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0918 18:52:43.862878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 18:52:43.863565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0918 18:52:43.862692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 18:52:43.863575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0918 18:52:43.863300       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 18:52:43.863661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0918 18:52:44.751836       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 18:52:44.751859       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 18:52:44.752492       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 18:52:44.752505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0918 18:52:44.760405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 18:52:44.760417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0918 18:52:46.760908       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-18 18:52:28 UTC, ends at Mon 2023-09-18 19:12:20 UTC. --
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524125    2297 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-run\") pod \"7eea1399-50e6-40e6-8424-bacd2f982bff\" (UID: \"7eea1399-50e6-40e6-8424-bacd2f982bff\") "
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524134    2297 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-host\") pod \"7eea1399-50e6-40e6-8424-bacd2f982bff\" (UID: \"7eea1399-50e6-40e6-8424-bacd2f982bff\") "
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524142    2297 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-bpffs\") pod \"7eea1399-50e6-40e6-8424-bacd2f982bff\" (UID: \"7eea1399-50e6-40e6-8424-bacd2f982bff\") "
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524150    2297 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-modules\") pod \"7eea1399-50e6-40e6-8424-bacd2f982bff\" (UID: \"7eea1399-50e6-40e6-8424-bacd2f982bff\") "
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524157    2297 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-debugfs\") pod \"7eea1399-50e6-40e6-8424-bacd2f982bff\" (UID: \"7eea1399-50e6-40e6-8424-bacd2f982bff\") "
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524165    2297 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-cgroup\") pod \"7eea1399-50e6-40e6-8424-bacd2f982bff\" (UID: \"7eea1399-50e6-40e6-8424-bacd2f982bff\") "
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524209    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-cgroup" (OuterVolumeSpecName: "cgroup") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524422    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-bpffs" (OuterVolumeSpecName: "bpffs") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524436    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-run" (OuterVolumeSpecName: "run") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524443    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-host" (OuterVolumeSpecName: "host") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524450    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-modules" (OuterVolumeSpecName: "modules") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524522    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-debugfs" (OuterVolumeSpecName: "debugfs") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.525925    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eea1399-50e6-40e6-8424-bacd2f982bff-kube-api-access-pfdln" (OuterVolumeSpecName: "kube-api-access-pfdln") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "kube-api-access-pfdln". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625197    2297 reconciler_common.go:300] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-bpffs\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625210    2297 reconciler_common.go:300] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-host\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625217    2297 reconciler_common.go:300] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-cgroup\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625222    2297 reconciler_common.go:300] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-modules\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625227    2297 reconciler_common.go:300] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-debugfs\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625233    2297 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pfdln\" (UniqueName: \"kubernetes.io/projected/7eea1399-50e6-40e6-8424-bacd2f982bff-kube-api-access-pfdln\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625238    2297 reconciler_common.go:300] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-run\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:15 addons-221000 kubelet[2297]: I0918 19:12:15.398217    2297 scope.go:117] "RemoveContainer" containerID="afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a"
	Sep 18 19:12:15 addons-221000 kubelet[2297]: I0918 19:12:15.407228    2297 scope.go:117] "RemoveContainer" containerID="afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a"
	Sep 18 19:12:15 addons-221000 kubelet[2297]: E0918 19:12:15.407605    2297 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a" containerID="afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a"
	Sep 18 19:12:15 addons-221000 kubelet[2297]: I0918 19:12:15.407627    2297 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a"} err="failed to get container status \"afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a\": rpc error: code = Unknown desc = Error response from daemon: No such container: afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a"
	Sep 18 19:12:16 addons-221000 kubelet[2297]: I0918 19:12:16.788684    2297 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7eea1399-50e6-40e6-8424-bacd2f982bff" path="/var/lib/kubelet/pods/7eea1399-50e6-40e6-8424-bacd2f982bff/volumes"
	
	* 
	* ==> storage-provisioner [96b1e19d34d0] <==
	* I0918 18:53:02.494940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 18:53:02.503242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 18:53:02.503674       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 18:53:02.507046       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 18:53:02.507107       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-221000_d01c2a4d-820c-47b2-b527-e47d47f0ebf9!
	I0918 18:53:02.507893       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"993ff549-fac0-4a25-b8bc-6e13c7f3eb70", APIVersion:"v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-221000_d01c2a4d-820c-47b2-b527-e47d47f0ebf9 became leader
	I0918 18:53:02.607443       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-221000_d01c2a4d-820c-47b2-b527-e47d47f0ebf9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-221000 -n addons-221000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-221000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (0.78s)

TestAddons/parallel/CloudSpanner (805.1s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:831: failed waiting for cloud-spanner-emulator deployment to stabilize: timed out waiting for the condition
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
addons_test.go:833: ***** TestAddons/parallel/CloudSpanner: pod "app=cloud-spanner-emulator" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:833: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-221000 -n addons-221000
addons_test.go:833: TestAddons/parallel/CloudSpanner: showing logs for failed pods as of 2023-09-18 12:11:00.386318 -0700 PDT m=+1156.802890167
addons_test.go:834: failed waiting for app=cloud-spanner-emulator pod: app=cloud-spanner-emulator within 6m0s: context deadline exceeded
addons_test.go:836: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-221000
addons_test.go:836: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-221000: exit status 10 (1m24.272363125s)
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE: disable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/deployment.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/deployment.yaml" does not exist
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:837: failed to disable cloud-spanner addon: args "out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-221000" : exit status 10
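The root cause of the exit status 10 above is subtle: kubectl's `--ignore-not-found` flag only tolerates missing *resources* on the cluster, not a missing manifest *file*, so `kubectl delete -f /etc/kubernetes/addons/deployment.yaml` still exits non-zero when the file itself is gone. A minimal sketch of a pre-flight guard that would avoid this failure mode (this is a hypothetical helper, not minikube's actual callback code; `safe_delete` is an assumed name):

```shell
#!/bin/sh
# safe_delete: hypothetical guard around an addon-disable callback.
# kubectl's --ignore-not-found suppresses "resource not found" errors,
# but a missing manifest file still causes a non-zero exit, as seen in
# the failure above. Checking for the file first sidesteps that.
safe_delete() {
    manifest="$1"
    if [ -f "$manifest" ]; then
        # Resource-level misses are tolerated; only run when the file exists.
        kubectl delete --force --ignore-not-found -f "$manifest"
    else
        echo "manifest already removed: $manifest (skipping delete)"
    fi
}

safe_delete /etc/kubernetes/addons/deployment.yaml
```

With this guard, a disable callback racing against a manifest that was already cleaned up degrades to a no-op instead of an `MK_ADDON_DISABLE` error.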
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-221000 -n addons-221000
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-221000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:51 PDT |                     |
	|         | -p download-only-242000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT |                     |
	|         | -p download-only-242000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:52 PDT |
	| delete  | -p download-only-242000        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:52 PDT |
	| delete  | -p download-only-242000        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:52 PDT |
	| start   | --download-only -p             | binary-mirror-077000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT |                     |
	|         | binary-mirror-077000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49414         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-077000        | binary-mirror-077000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:52 PDT |
	| start   | -p addons-221000               | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT | 18 Sep 23 11:59 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 12:11 PDT |                     |
	|         | addons-221000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 12:11 PDT | 18 Sep 23 12:11 PDT |
	|         | -p addons-221000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-221000 addons           | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 12:11 PDT | 18 Sep 23 12:12 PDT |
	|         | disable csi-hostpath-driver    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-221000 addons           | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 12:12 PDT | 18 Sep 23 12:12 PDT |
	|         | disable volumesnapshots        |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-221000 addons           | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 12:12 PDT | 18 Sep 23 12:12 PDT |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-221000        | jenkins | v1.31.2 | 18 Sep 23 12:12 PDT | 18 Sep 23 12:12 PDT |
	|         | addons-221000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 11:52:16
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 11:52:16.711602    1740 out.go:296] Setting OutFile to fd 1 ...
	I0918 11:52:16.711748    1740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 11:52:16.711751    1740 out.go:309] Setting ErrFile to fd 2...
	I0918 11:52:16.711753    1740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 11:52:16.711880    1740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 11:52:16.712918    1740 out.go:303] Setting JSON to false
	I0918 11:52:16.728001    1740 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1310,"bootTime":1695061826,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 11:52:16.728087    1740 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 11:52:16.732378    1740 out.go:177] * [addons-221000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 11:52:16.739454    1740 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 11:52:16.743421    1740 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 11:52:16.739507    1740 notify.go:220] Checking for updates...
	I0918 11:52:16.749403    1740 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 11:52:16.752377    1740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 11:52:16.755381    1740 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 11:52:16.758417    1740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 11:52:16.761446    1740 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 11:52:16.765371    1740 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 11:52:16.777355    1740 start.go:298] selected driver: qemu2
	I0918 11:52:16.777364    1740 start.go:902] validating driver "qemu2" against <nil>
	I0918 11:52:16.777372    1740 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 11:52:16.779390    1740 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 11:52:16.782385    1740 out.go:177] * Automatically selected the socket_vmnet network
	I0918 11:52:16.785462    1740 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 11:52:16.785488    1740 cni.go:84] Creating CNI manager for ""
	I0918 11:52:16.785496    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 11:52:16.785507    1740 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 11:52:16.785513    1740 start_flags.go:321] config:
	{Name:addons-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID
:0 AutoPauseInterval:1m0s}
	I0918 11:52:16.789634    1740 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 11:52:16.798394    1740 out.go:177] * Starting control plane node addons-221000 in cluster addons-221000
	I0918 11:52:16.802195    1740 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 11:52:16.802217    1740 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 11:52:16.802234    1740 cache.go:57] Caching tarball of preloaded images
	I0918 11:52:16.802301    1740 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 11:52:16.802315    1740 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 11:52:16.802542    1740 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/config.json ...
	I0918 11:52:16.802555    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/config.json: {Name:mk6624c585fbc7911138df2cd59d1f2e10251cf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:16.802799    1740 start.go:365] acquiring machines lock for addons-221000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 11:52:16.802873    1740 start.go:369] acquired machines lock for "addons-221000" in 68.417µs
	I0918 11:52:16.802886    1740 start.go:93] Provisioning new machine with config: &{Name:addons-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:addons-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 11:52:16.802925    1740 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 11:52:16.810242    1740 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0918 11:52:17.161676    1740 start.go:159] libmachine.API.Create for "addons-221000" (driver="qemu2")
	I0918 11:52:17.161722    1740 client.go:168] LocalClient.Create starting
	I0918 11:52:17.161932    1740 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 11:52:17.253776    1740 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 11:52:17.312301    1740 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 11:52:17.776256    1740 main.go:141] libmachine: Creating SSH key...
	I0918 11:52:17.897328    1740 main.go:141] libmachine: Creating Disk image...
	I0918 11:52:17.897334    1740 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 11:52:17.897524    1740 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2
	I0918 11:52:17.933044    1740 main.go:141] libmachine: STDOUT: 
	I0918 11:52:17.933072    1740 main.go:141] libmachine: STDERR: 
	I0918 11:52:17.933136    1740 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2 +20000M
	I0918 11:52:17.940597    1740 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 11:52:17.940609    1740 main.go:141] libmachine: STDERR: 
	I0918 11:52:17.940623    1740 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2
	I0918 11:52:17.940628    1740 main.go:141] libmachine: Starting QEMU VM...
	I0918 11:52:17.940657    1740 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ae:e8:0a:fd:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/disk.qcow2
	I0918 11:52:18.008779    1740 main.go:141] libmachine: STDOUT: 
	I0918 11:52:18.008804    1740 main.go:141] libmachine: STDERR: 
	I0918 11:52:18.008808    1740 main.go:141] libmachine: Attempt 0
	I0918 11:52:18.008820    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:20.011046    1740 main.go:141] libmachine: Attempt 1
	I0918 11:52:20.011124    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:22.013479    1740 main.go:141] libmachine: Attempt 2
	I0918 11:52:22.013559    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:24.015632    1740 main.go:141] libmachine: Attempt 3
	I0918 11:52:24.015645    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:26.017675    1740 main.go:141] libmachine: Attempt 4
	I0918 11:52:26.017681    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:28.018761    1740 main.go:141] libmachine: Attempt 5
	I0918 11:52:28.018782    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:30.020886    1740 main.go:141] libmachine: Attempt 6
	I0918 11:52:30.020920    1740 main.go:141] libmachine: Searching for ce:ae:e8:a:fd:16 in /var/db/dhcpd_leases ...
	I0918 11:52:30.021070    1740 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0918 11:52:30.021123    1740 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6509edec}
	I0918 11:52:30.021130    1740 main.go:141] libmachine: Found match: ce:ae:e8:a:fd:16
	I0918 11:52:30.021140    1740 main.go:141] libmachine: IP: 192.168.105.2
	I0918 11:52:30.021147    1740 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0918 11:52:31.026067    1740 machine.go:88] provisioning docker machine ...
	I0918 11:52:31.026085    1740 buildroot.go:166] provisioning hostname "addons-221000"
	I0918 11:52:31.026964    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.027231    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.027237    1740 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-221000 && echo "addons-221000" | sudo tee /etc/hostname
	I0918 11:52:31.084404    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-221000
	
	I0918 11:52:31.084473    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.084732    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.084740    1740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-221000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-221000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-221000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 11:52:31.144009    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 11:52:31.144022    1740 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17263-1251/.minikube CaCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17263-1251/.minikube}
	I0918 11:52:31.144033    1740 buildroot.go:174] setting up certificates
	I0918 11:52:31.144038    1740 provision.go:83] configureAuth start
	I0918 11:52:31.144042    1740 provision.go:138] copyHostCerts
	I0918 11:52:31.144138    1740 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.pem (1082 bytes)
	I0918 11:52:31.144342    1740 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/cert.pem (1123 bytes)
	I0918 11:52:31.144435    1740 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/key.pem (1679 bytes)
	I0918 11:52:31.144503    1740 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem org=jenkins.addons-221000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-221000]
	I0918 11:52:31.225327    1740 provision.go:172] copyRemoteCerts
	I0918 11:52:31.225385    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 11:52:31.225394    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:52:31.256352    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 11:52:31.263330    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0918 11:52:31.270197    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 11:52:31.276715    1740 provision.go:86] duration metric: configureAuth took 132.670667ms
	I0918 11:52:31.276723    1740 buildroot.go:189] setting minikube options for container-runtime
	I0918 11:52:31.276820    1740 config.go:182] Loaded profile config "addons-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 11:52:31.276857    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.277075    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.277080    1740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 11:52:31.337901    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0918 11:52:31.337910    1740 buildroot.go:70] root file system type: tmpfs
	I0918 11:52:31.337970    1740 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 11:52:31.338012    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.338275    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.338315    1740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 11:52:31.400816    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 11:52:31.400863    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.401116    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.401126    1740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0918 11:52:31.746670    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0918 11:52:31.746684    1740 machine.go:91] provisioned docker machine in 720.613ms
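The provisioning step above writes `docker.service.new` and then runs `diff -u old new || { mv …; daemon-reload; restart; }`, i.e. the unit is only swapped in (and the daemon only restarted) when the rendered file differs from what is on disk. A minimal sketch of that update-if-changed idiom, using throwaway paths in `/tmp` rather than the VM's `/lib/systemd/system`:

```shell
# Sketch of the update-if-changed idiom from the log (illustrative paths).
set -eu
unit=/tmp/demo.service            # stand-in for /lib/systemd/system/docker.service
printf '[Unit]\nDescription=Demo\n' > "${unit}.new"
# diff exits non-zero when the files differ or the target is missing,
# so the replacement branch runs only in those cases.
if ! diff -u "$unit" "${unit}.new" >/dev/null 2>&1; then
    mv "${unit}.new" "$unit"
    # the real provisioner follows this with:
    #   systemctl daemon-reload && systemctl enable docker && systemctl restart docker
fi
cat "$unit"
```

On a fresh VM `diff` fails with "can't stat", exactly as in the log output above, and the new unit is installed unconditionally.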
	I0918 11:52:31.746690    1740 client.go:171] LocalClient.Create took 14.585099291s
	I0918 11:52:31.746703    1740 start.go:167] duration metric: libmachine.API.Create for "addons-221000" took 14.585173417s
	I0918 11:52:31.746707    1740 start.go:300] post-start starting for "addons-221000" (driver="qemu2")
	I0918 11:52:31.746711    1740 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 11:52:31.746780    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 11:52:31.746790    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:52:31.775601    1740 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 11:52:31.776975    1740 info.go:137] Remote host: Buildroot 2021.02.12
	I0918 11:52:31.776983    1740 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17263-1251/.minikube/addons for local assets ...
	I0918 11:52:31.777055    1740 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17263-1251/.minikube/files for local assets ...
	I0918 11:52:31.777083    1740 start.go:303] post-start completed in 30.374292ms
	I0918 11:52:31.777437    1740 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/config.json ...
	I0918 11:52:31.777603    1740 start.go:128] duration metric: createHost completed in 14.974815417s
	I0918 11:52:31.777667    1740 main.go:141] libmachine: Using SSH client type: native
	I0918 11:52:31.777884    1740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c60760] 0x100c62ed0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 11:52:31.777888    1740 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0918 11:52:31.833629    1740 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695063151.431021085
	
	I0918 11:52:31.833635    1740 fix.go:206] guest clock: 1695063151.431021085
	I0918 11:52:31.833638    1740 fix.go:219] Guest: 2023-09-18 11:52:31.431021085 -0700 PDT Remote: 2023-09-18 11:52:31.777608 -0700 PDT m=+15.083726834 (delta=-346.586915ms)
	I0918 11:52:31.833654    1740 fix.go:190] guest clock delta is within tolerance: -346.586915ms
	I0918 11:52:31.833656    1740 start.go:83] releasing machines lock for "addons-221000", held for 15.0309195s
	I0918 11:52:31.833905    1740 ssh_runner.go:195] Run: cat /version.json
	I0918 11:52:31.833915    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:52:31.833930    1740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 11:52:31.833973    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:52:31.902596    1740 ssh_runner.go:195] Run: systemctl --version
	I0918 11:52:31.904777    1740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 11:52:31.906638    1740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 11:52:31.906668    1740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 11:52:31.911697    1740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 11:52:31.911704    1740 start.go:469] detecting cgroup driver to use...
	I0918 11:52:31.911799    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 11:52:31.917320    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0918 11:52:31.920500    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 11:52:31.923811    1740 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 11:52:31.923843    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 11:52:31.926950    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 11:52:31.929666    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 11:52:31.932680    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 11:52:31.936002    1740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 11:52:31.939362    1740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 11:52:31.942186    1740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 11:52:31.944734    1740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 11:52:31.947664    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:32.027464    1740 ssh_runner.go:195] Run: sudo systemctl restart containerd
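The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place to force the `cgroupfs` driver while preserving indentation via the `\1` capture group. A hedged sketch of that pattern against a throwaway config file (the keys mirror the log; the path is illustrative and GNU sed is assumed):

```shell
# Sketch of the in-place containerd config edits from the log.
set -eu
cfg=/tmp/config.toml.demo         # stand-in for /etc/containerd/config.toml
printf '    SystemdCgroup = true\n    restrict_oom_score_adj = true\n' > "$cfg"
# Capture leading spaces so indentation survives the rewrite.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
cat "$cfg"
```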
	I0918 11:52:32.036559    1740 start.go:469] detecting cgroup driver to use...
	I0918 11:52:32.036614    1740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 11:52:32.042440    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 11:52:32.047583    1740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 11:52:32.053840    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 11:52:32.058225    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 11:52:32.062305    1740 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0918 11:52:32.098720    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 11:52:32.103440    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 11:52:32.108844    1740 ssh_runner.go:195] Run: which cri-dockerd
	I0918 11:52:32.110173    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 11:52:32.112731    1740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0918 11:52:32.117532    1740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 11:52:32.194769    1740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 11:52:32.269401    1740 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 11:52:32.269417    1740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0918 11:52:32.274373    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:32.355030    1740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 11:52:33.517984    1740 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1629475s)
	I0918 11:52:33.518044    1740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 11:52:33.595160    1740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0918 11:52:33.670332    1740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 11:52:33.746578    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:33.822625    1740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0918 11:52:33.829925    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:33.909957    1740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0918 11:52:33.933398    1740 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0918 11:52:33.933487    1740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0918 11:52:33.935757    1740 start.go:537] Will wait 60s for crictl version
	I0918 11:52:33.935801    1740 ssh_runner.go:195] Run: which crictl
	I0918 11:52:33.937147    1740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 11:52:33.952602    1740 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0918 11:52:33.952673    1740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 11:52:33.962082    1740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 11:52:33.975334    1740 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0918 11:52:33.975416    1740 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0918 11:52:33.976970    1740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
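The `/etc/hosts` command above is an idempotent append: filter out any existing line for the name, re-add the fresh mapping, and copy the result back over the original. The same idiom on a scratch file (a portable tab variable replaces the log's `$'\t'` bashism):

```shell
# Sketch of the idempotent hosts-entry idiom from the log (scratch file,
# not /etc/hosts; the stale 10.0.0.9 mapping is invented for the demo).
set -eu
hosts=/tmp/hosts.demo
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Drop any old mapping for the name, then append the current one.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '192.168.105.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running it twice leaves exactly one `host.minikube.internal` line, which is why minikube can apply it on every start.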
	I0918 11:52:33.980853    1740 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 11:52:33.980897    1740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 11:52:33.986139    1740 docker.go:636] Got preloaded images: 
	I0918 11:52:33.986147    1740 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0918 11:52:33.986189    1740 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 11:52:33.988954    1740 ssh_runner.go:195] Run: which lz4
	I0918 11:52:33.990479    1740 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0918 11:52:33.991766    1740 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 11:52:33.991780    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I0918 11:52:35.310513    1740 docker.go:600] Took 1.320057 seconds to copy over tarball
	I0918 11:52:35.310582    1740 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 11:52:36.348518    1740 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.03793175s)
	I0918 11:52:36.348535    1740 ssh_runner.go:146] rm: /preloaded.tar.lz4
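The preload path above is: stat the tarball (absent on first boot), scp the cached image tarball over, extract it directly into `/var`, then delete it. A small sketch of that copy/extract/cleanup flow, with gzip standing in for the log's lz4 compression and `/tmp` standing in for `/var`:

```shell
# Sketch of the preload-tarball flow from the log (gzip in place of lz4,
# throwaway directories instead of /var).
set -eu
src=/tmp/preload-src; dst=/tmp/preload-dst
mkdir -p "$src/lib" "$dst"
echo hello > "$src/lib/file.txt"
tar -C "$src" -czf /tmp/preloaded.tar.gz lib      # build the "cached" tarball
tar -C "$dst" -xzf /tmp/preloaded.tar.gz          # log uses: tar -I lz4 -C /var -xf ...
rm /tmp/preloaded.tar.gz                          # tarball removed after extraction
cat "$dst/lib/file.txt"
```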
	I0918 11:52:36.364745    1740 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 11:52:36.368295    1740 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0918 11:52:36.373429    1740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 11:52:36.450305    1740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 11:52:38.940530    1740 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.490231167s)
	I0918 11:52:38.940627    1740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 11:52:38.946699    1740 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 11:52:38.946709    1740 cache_images.go:84] Images are preloaded, skipping loading
	I0918 11:52:38.946766    1740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 11:52:38.954428    1740 cni.go:84] Creating CNI manager for ""
	I0918 11:52:38.954439    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 11:52:38.954458    1740 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0918 11:52:38.954467    1740 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-221000 NodeName:addons-221000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}

	I0918 11:52:38.954540    1740 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-221000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 11:52:38.954592    1740 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-221000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0918 11:52:38.954661    1740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0918 11:52:38.957531    1740 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 11:52:38.957562    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 11:52:38.960385    1740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0918 11:52:38.965592    1740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 11:52:38.970301    1740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0918 11:52:38.975165    1740 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0918 11:52:38.976400    1740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 11:52:38.980255    1740 certs.go:56] Setting up /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000 for IP: 192.168.105.2
	I0918 11:52:38.980276    1740 certs.go:190] acquiring lock for shared ca certs: {Name:mkfac81ee65979b8c4f5ece6243c3a0190531689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:38.980470    1740 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key
	I0918 11:52:39.170828    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt ...
	I0918 11:52:39.170844    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt: {Name:mk0f303ee67627c25d1d04e1887861f15cdad763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.171150    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key ...
	I0918 11:52:39.171155    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key: {Name:mkc5e20e8161cfdcfc3d5dcd8300765ea2c12112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.171271    1740 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key
	I0918 11:52:39.287022    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.crt ...
	I0918 11:52:39.287027    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.crt: {Name:mk54c49c3c44ff09930e6c0f57238b89cff4c5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.287171    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key ...
	I0918 11:52:39.287173    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key: {Name:mk05faae5769358f82565f32c1f37a244f2478c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.287315    1740 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.key
	I0918 11:52:39.287337    1740 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt with IP's: []
	I0918 11:52:39.376234    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt ...
	I0918 11:52:39.376241    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: {Name:mkc8e654c6f2522197f557cb47d266f15eebaadd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.376467    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.key ...
	I0918 11:52:39.376471    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.key: {Name:mkf345dd56f86115b31ecd965617f4c21d6a0cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.376571    1740 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key.96055969
	I0918 11:52:39.376580    1740 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0918 11:52:39.429944    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt.96055969 ...
	I0918 11:52:39.429952    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt.96055969: {Name:mkd69eb587bd0dc6ccdbaa88b78f4f92f2b47b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.430095    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key.96055969 ...
	I0918 11:52:39.430098    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key.96055969: {Name:mk76ea3a7fbbef2305f74e52afdf06cda921c8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.430199    1740 certs.go:337] copying /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt
	I0918 11:52:39.430382    1740 certs.go:341] copying /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key
	I0918 11:52:39.430499    1740 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.key
	I0918 11:52:39.430509    1740 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.crt with IP's: []
	I0918 11:52:39.698555    1740 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.crt ...
	I0918 11:52:39.698563    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.crt: {Name:mk6d7a924ed10f0012b290ec4e0ea6bf1b7bfc02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.698767    1740 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.key ...
	I0918 11:52:39.698773    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.key: {Name:mk8d78e9179e4c57e4602e98d4fc6a37885b4d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:39.699037    1740 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 11:52:39.699062    1740 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem (1082 bytes)
	I0918 11:52:39.699081    1740 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem (1123 bytes)
	I0918 11:52:39.699100    1740 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem (1679 bytes)
	I0918 11:52:39.699418    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0918 11:52:39.707305    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 11:52:39.713956    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 11:52:39.720614    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 11:52:39.727643    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 11:52:39.734609    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 11:52:39.741305    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 11:52:39.748141    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0918 11:52:39.755277    1740 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 11:52:39.762025    1740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 11:52:39.767701    1740 ssh_runner.go:195] Run: openssl version
	I0918 11:52:39.769871    1740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 11:52:39.772909    1740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 11:52:39.774536    1740 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 18 18:52 /usr/share/ca-certificates/minikubeCA.pem
	I0918 11:52:39.774555    1740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 11:52:39.776417    1740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
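The two commands above install `minikubeCA.pem` into the system trust store the way OpenSSL expects: compute the certificate's subject-name hash with `openssl x509 -hash -noout`, then symlink the PEM as `<hash>.0` (here `b5213941.0`) so lookups by hash resolve. A sketch with a throwaway self-signed CA in `/tmp` (assumes `openssl` is on PATH; all paths are illustrative):

```shell
# Sketch of the hash-symlink CA install idiom from the log.
set -eu
dir=/tmp/certs.demo; mkdir -p "$dir"
# Generate a disposable self-signed "CA" to play the role of minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
    -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
# OpenSSL looks up CAs in a directory by <subject-hash>.<n> symlinks.
ln -fs "$dir/ca.pem" "$dir/$hash.0"
echo "$hash"
```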
	I0918 11:52:39.779698    1740 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0918 11:52:39.781162    1740 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0918 11:52:39.781200    1740 kubeadm.go:404] StartCluster: {Name:addons-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 11:52:39.781263    1740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 11:52:39.787256    1740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 11:52:39.790167    1740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 11:52:39.792879    1740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 11:52:39.795890    1740 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 11:52:39.795906    1740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 11:52:39.820130    1740 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0918 11:52:39.820157    1740 kubeadm.go:322] [preflight] Running pre-flight checks
	I0918 11:52:39.874262    1740 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 11:52:39.874320    1740 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 11:52:39.874401    1740 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 11:52:39.936649    1740 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 11:52:39.946863    1740 out.go:204]   - Generating certificates and keys ...
	I0918 11:52:39.946906    1740 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0918 11:52:39.946940    1740 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0918 11:52:40.057135    1740 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 11:52:40.267412    1740 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0918 11:52:40.415260    1740 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0918 11:52:40.592293    1740 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0918 11:52:40.714190    1740 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0918 11:52:40.714252    1740 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-221000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0918 11:52:40.818329    1740 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0918 11:52:40.818397    1740 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-221000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0918 11:52:41.068370    1740 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 11:52:41.110794    1740 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 11:52:41.218301    1740 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0918 11:52:41.218335    1740 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 11:52:41.282421    1740 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 11:52:41.650315    1740 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 11:52:41.733907    1740 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 11:52:41.925252    1740 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 11:52:41.925561    1740 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 11:52:41.927413    1740 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 11:52:41.931680    1740 out.go:204]   - Booting up control plane ...
	I0918 11:52:41.931754    1740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 11:52:41.931794    1740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 11:52:41.931831    1740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 11:52:41.935171    1740 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 11:52:41.935565    1740 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 11:52:41.935586    1740 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0918 11:52:42.024365    1740 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 11:52:45.527756    1740 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.503453 seconds
	I0918 11:52:45.527852    1740 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 11:52:45.533290    1740 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 11:52:46.043984    1740 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 11:52:46.044088    1740 kubeadm.go:322] [mark-control-plane] Marking the node addons-221000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 11:52:46.548611    1740 kubeadm.go:322] [bootstrap-token] Using token: 0otx18.vbdfa1zgl84pbc1n
	I0918 11:52:46.552403    1740 out.go:204]   - Configuring RBAC rules ...
	I0918 11:52:46.552463    1740 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 11:52:46.553348    1740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 11:52:46.557357    1740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 11:52:46.558552    1740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 11:52:46.559879    1740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 11:52:46.560890    1740 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 11:52:46.567944    1740 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 11:52:46.739677    1740 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0918 11:52:46.956246    1740 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0918 11:52:46.958050    1740 kubeadm.go:322] 
	I0918 11:52:46.958085    1740 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0918 11:52:46.958096    1740 kubeadm.go:322] 
	I0918 11:52:46.958138    1740 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0918 11:52:46.958143    1740 kubeadm.go:322] 
	I0918 11:52:46.958157    1740 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0918 11:52:46.958186    1740 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 11:52:46.958221    1740 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 11:52:46.958226    1740 kubeadm.go:322] 
	I0918 11:52:46.958261    1740 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0918 11:52:46.958267    1740 kubeadm.go:322] 
	I0918 11:52:46.958291    1740 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 11:52:46.958294    1740 kubeadm.go:322] 
	I0918 11:52:46.958317    1740 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0918 11:52:46.958375    1740 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 11:52:46.958411    1740 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 11:52:46.958416    1740 kubeadm.go:322] 
	I0918 11:52:46.958458    1740 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 11:52:46.958503    1740 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0918 11:52:46.958507    1740 kubeadm.go:322] 
	I0918 11:52:46.958562    1740 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0otx18.vbdfa1zgl84pbc1n \
	I0918 11:52:46.958623    1740 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ba7ba5242719fff40b98ade8a053fc0c6dded3185e28e2742ca7444c7c25a7a5 \
	I0918 11:52:46.958634    1740 kubeadm.go:322] 	--control-plane 
	I0918 11:52:46.958636    1740 kubeadm.go:322] 
	I0918 11:52:46.958676    1740 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0918 11:52:46.958681    1740 kubeadm.go:322] 
	I0918 11:52:46.958735    1740 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0otx18.vbdfa1zgl84pbc1n \
	I0918 11:52:46.958805    1740 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ba7ba5242719fff40b98ade8a053fc0c6dded3185e28e2742ca7444c7c25a7a5 
	I0918 11:52:46.958862    1740 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 11:52:46.958868    1740 cni.go:84] Creating CNI manager for ""
	I0918 11:52:46.958880    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 11:52:46.967096    1740 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 11:52:46.970213    1740 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 11:52:46.973479    1740 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0918 11:52:46.977991    1740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 11:52:46.978031    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:46.978044    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36 minikube.k8s.io/name=addons-221000 minikube.k8s.io/updated_at=2023_09_18T11_52_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:47.038282    1740 ops.go:34] apiserver oom_adj: -16
	I0918 11:52:47.038334    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:47.073908    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:47.620721    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:48.118781    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:48.620650    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:49.120663    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:49.620741    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:50.120032    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:50.620638    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:51.118777    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:51.618788    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:52.120234    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:52.619043    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:53.120606    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:53.619266    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:54.118979    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:54.618916    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:55.120620    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:55.620599    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:56.120585    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:56.618629    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:57.120587    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:57.618783    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:58.120645    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:58.620559    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:59.120556    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:52:59.619252    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:53:00.118796    1740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 11:53:00.169662    1740 kubeadm.go:1081] duration metric: took 13.191788791s to wait for elevateKubeSystemPrivileges.
	I0918 11:53:00.169677    1740 kubeadm.go:406] StartCluster complete in 20.38867075s
	I0918 11:53:00.169687    1740 settings.go:142] acquiring lock: {Name:mke420f28dda4f7a752738b3e6d571dc4216779e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:53:00.169849    1740 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 11:53:00.170110    1740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/kubeconfig: {Name:mk07020c5b974cf07ca0cda25f72a521eb245fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:53:00.170308    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 11:53:00.170433    1740 config.go:182] Loaded profile config "addons-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 11:53:00.170379    1740 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0918 11:53:00.170512    1740 addons.go:69] Setting ingress=true in profile "addons-221000"
	I0918 11:53:00.170517    1740 addons.go:69] Setting ingress-dns=true in profile "addons-221000"
	I0918 11:53:00.170520    1740 addons.go:231] Setting addon ingress=true in "addons-221000"
	I0918 11:53:00.170523    1740 addons.go:231] Setting addon ingress-dns=true in "addons-221000"
	I0918 11:53:00.170533    1740 addons.go:69] Setting metrics-server=true in profile "addons-221000"
	I0918 11:53:00.170538    1740 addons.go:231] Setting addon metrics-server=true in "addons-221000"
	I0918 11:53:00.170552    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.170556    1740 addons.go:69] Setting inspektor-gadget=true in profile "addons-221000"
	I0918 11:53:00.170560    1740 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-221000"
	I0918 11:53:00.170564    1740 addons.go:69] Setting gcp-auth=true in profile "addons-221000"
	I0918 11:53:00.170569    1740 mustload.go:65] Loading cluster: addons-221000
	I0918 11:53:00.170573    1740 addons.go:69] Setting registry=true in profile "addons-221000"
	I0918 11:53:00.170577    1740 addons.go:231] Setting addon registry=true in "addons-221000"
	I0918 11:53:00.170587    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.170595    1740 addons.go:69] Setting default-storageclass=true in profile "addons-221000"
	I0918 11:53:00.170618    1740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-221000"
	I0918 11:53:00.170629    1740 addons.go:69] Setting storage-provisioner=true in profile "addons-221000"
	I0918 11:53:00.170639    1740 config.go:182] Loaded profile config "addons-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 11:53:00.170657    1740 addons.go:231] Setting addon storage-provisioner=true in "addons-221000"
	I0918 11:53:00.170705    1740 host.go:66] Checking if "addons-221000" exists ...
	W0918 11:53:00.170820    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	W0918 11:53:00.170825    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	W0918 11:53:00.170829    1740 addons.go:277] "addons-221000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0918 11:53:00.170831    1740 addons.go:277] "addons-221000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0918 11:53:00.170552    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.170834    1740 addons.go:467] Verifying addon registry=true in "addons-221000"
	I0918 11:53:00.170556    1740 addons.go:69] Setting cloud-spanner=true in profile "addons-221000"
	I0918 11:53:00.175435    1740 out.go:177] * Verifying registry addon...
	I0918 11:53:00.170560    1740 addons.go:231] Setting addon inspektor-gadget=true in "addons-221000"
	I0918 11:53:00.170569    1740 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-221000"
	I0918 11:53:00.170865    1740 addons.go:231] Setting addon cloud-spanner=true in "addons-221000"
	I0918 11:53:00.170553    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.170512    1740 addons.go:69] Setting volumesnapshots=true in profile "addons-221000"
	W0918 11:53:00.171149    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	W0918 11:53:00.171162    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	I0918 11:53:00.171465    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.188489    1740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 11:53:00.182591    1740 host.go:66] Checking if "addons-221000" exists ...
	W0918 11:53:00.182599    1740 addons.go:277] "addons-221000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0918 11:53:00.182603    1740 addons_storage_classes.go:55] "addons-221000" is not running, writing default-storageclass=true to disk and skipping enablement
	I0918 11:53:00.182614    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.182623    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.182641    1740 addons.go:231] Setting addon volumesnapshots=true in "addons-221000"
	I0918 11:53:00.183250    1740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 11:53:00.196542    1740 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0918 11:53:00.192833    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:00.192879    1740 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 11:53:00.192896    1740 addons.go:467] Verifying addon ingress=true in "addons-221000"
	I0918 11:53:00.192903    1740 addons.go:231] Setting addon default-storageclass=true in "addons-221000"
	W0918 11:53:00.193202    1740 host.go:54] host status for "addons-221000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/monitor: connect: connection refused
	I0918 11:53:00.196510    1740 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-221000" context rescaled to 1 replicas
	I0918 11:53:00.199509    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 11:53:00.199541    1740 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 11:53:00.199554    1740 host.go:66] Checking if "addons-221000" exists ...
	W0918 11:53:00.199564    1740 addons.go:277] "addons-221000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0918 11:53:00.206530    1740 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0918 11:53:00.210356    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 11:53:00.210367    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.210376    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 11:53:00.210391    1740 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 11:53:00.211130    1740 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 11:53:00.213481    1740 out.go:177] * Verifying ingress addon...
	I0918 11:53:00.214506    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 11:53:00.214532    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.214630    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 11:53:00.218827    1740 out.go:177] * Verifying Kubernetes components...
	I0918 11:53:00.221560    1740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 11:53:00.218836    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.229444    1740 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 11:53:00.229456    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 11:53:00.217515    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 11:53:00.229466    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 11:53:00.229467    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.229472    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.220086    1740 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 11:53:00.221623    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 11:53:00.236499    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 11:53:00.234112    1740 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0918 11:53:00.242380    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 11:53:00.251491    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 11:53:00.252703    1740 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 11:53:00.260396    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 11:53:00.267323    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 11:53:00.274307    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 11:53:00.284458    1740 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 11:53:00.287512    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 11:53:00.287521    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 11:53:00.287531    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:00.322513    1740 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 11:53:00.322523    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 11:53:00.336060    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 11:53:00.340017    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 11:53:00.346686    1740 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 11:53:00.346693    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 11:53:00.362466    1740 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 11:53:00.362477    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 11:53:00.363923    1740 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 11:53:00.363928    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 11:53:00.368153    1740 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 11:53:00.368161    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 11:53:00.376728    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 11:53:00.376740    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 11:53:00.395920    1740 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 11:53:00.395931    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 11:53:00.403313    1740 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 11:53:00.403320    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 11:53:00.404660    1740 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 11:53:00.404665    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 11:53:00.430003    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 11:53:00.430014    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 11:53:00.492665    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 11:53:00.492677    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 11:53:00.495149    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 11:53:00.495157    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 11:53:00.500095    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 11:53:00.522789    1740 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 11:53:00.522799    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 11:53:00.561433    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 11:53:00.561445    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 11:53:00.561578    1740 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 11:53:00.561584    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 11:53:00.579630    1740 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 11:53:00.579641    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 11:53:00.604202    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 11:53:00.607378    1740 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 11:53:00.607389    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 11:53:00.624742    1740 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 11:53:00.624755    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 11:53:00.641432    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 11:53:00.641441    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 11:53:00.675716    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 11:53:00.675728    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 11:53:00.684962    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 11:53:00.684971    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 11:53:00.690643    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 11:53:00.690652    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 11:53:00.691887    1740 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 11:53:00.691893    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 11:53:00.701987    1740 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 11:53:00.701999    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 11:53:00.719094    1740 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 11:53:00.719103    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0918 11:53:00.821368    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 11:53:00.824330    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 11:53:01.167637    1740 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0918 11:53:01.168100    1740 node_ready.go:35] waiting up to 6m0s for node "addons-221000" to be "Ready" ...
	I0918 11:53:01.169940    1740 node_ready.go:49] node "addons-221000" has status "Ready":"True"
	I0918 11:53:01.169963    1740 node_ready.go:38] duration metric: took 1.836333ms waiting for node "addons-221000" to be "Ready" ...
	I0918 11:53:01.169968    1740 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 11:53:01.172987    1740 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mbgns" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:01.659785    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.159683334s)
	I0918 11:53:01.659805    1740 addons.go:467] Verifying addon metrics-server=true in "addons-221000"
	I0918 11:53:01.659835    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.055627667s)
	W0918 11:53:01.659861    1740 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 11:53:01.659882    1740 retry.go:31] will retry after 288.841008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 11:53:01.660144    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.320131291s)
	I0918 11:53:01.949280    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 11:53:02.183526    1740 pod_ready.go:92] pod "coredns-5dd5756b68-mbgns" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.183539    1740 pod_ready.go:81] duration metric: took 1.010553542s waiting for pod "coredns-5dd5756b68-mbgns" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.183545    1740 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-z4cmf" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.184891    1740 pod_ready.go:97] error getting pod "coredns-5dd5756b68-z4cmf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-z4cmf" not found
	I0918 11:53:02.184904    1740 pod_ready.go:81] duration metric: took 1.354709ms waiting for pod "coredns-5dd5756b68-z4cmf" in "kube-system" namespace to be "Ready" ...
	E0918 11:53:02.184909    1740 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-z4cmf" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-z4cmf" not found
	I0918 11:53:02.184914    1740 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.187700    1740 pod_ready.go:92] pod "etcd-addons-221000" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.187709    1740 pod_ready.go:81] duration metric: took 2.791875ms waiting for pod "etcd-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.187714    1740 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.190228    1740 pod_ready.go:92] pod "kube-apiserver-addons-221000" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.190236    1740 pod_ready.go:81] duration metric: took 2.518208ms waiting for pod "kube-apiserver-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.190240    1740 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.571748    1740 pod_ready.go:92] pod "kube-controller-manager-addons-221000" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.571761    1740 pod_ready.go:81] duration metric: took 381.518375ms waiting for pod "kube-controller-manager-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.571765    1740 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7gqn" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.823208    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.998874458s)
	I0918 11:53:02.823230    1740 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-221000"
	I0918 11:53:02.828329    1740 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 11:53:02.838784    1740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 11:53:02.842760    1740 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 11:53:02.842767    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:02.847417    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:02.972129    1740 pod_ready.go:92] pod "kube-proxy-q7gqn" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:02.972138    1740 pod_ready.go:81] duration metric: took 400.3735ms waiting for pod "kube-proxy-q7gqn" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:02.972143    1740 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:03.351863    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:03.371796    1740 pod_ready.go:92] pod "kube-scheduler-addons-221000" in "kube-system" namespace has status "Ready":"True"
	I0918 11:53:03.371806    1740 pod_ready.go:81] duration metric: took 399.662875ms waiting for pod "kube-scheduler-addons-221000" in "kube-system" namespace to be "Ready" ...
	I0918 11:53:03.371810    1740 pod_ready.go:38] duration metric: took 2.201856875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 11:53:03.371820    1740 api_server.go:52] waiting for apiserver process to appear ...
	I0918 11:53:03.371875    1740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 11:53:03.851785    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:04.352157    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:04.679993    1740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.73071775s)
	I0918 11:53:04.680002    1740 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.308127625s)
	I0918 11:53:04.680021    1740 api_server.go:72] duration metric: took 4.46545525s to wait for apiserver process to appear ...
	I0918 11:53:04.680025    1740 api_server.go:88] waiting for apiserver healthz status ...
	I0918 11:53:04.680031    1740 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0918 11:53:04.683995    1740 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0918 11:53:04.684750    1740 api_server.go:141] control plane version: v1.28.2
	I0918 11:53:04.684756    1740 api_server.go:131] duration metric: took 4.728917ms to wait for apiserver health ...
	I0918 11:53:04.684760    1740 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 11:53:04.689157    1740 system_pods.go:59] 13 kube-system pods found
	I0918 11:53:04.689166    1740 system_pods.go:61] "coredns-5dd5756b68-mbgns" [376db80e-bef7-49a8-805c-d250bbb5ddc5] Running
	I0918 11:53:04.689171    1740 system_pods.go:61] "csi-hostpath-attacher-0" [b3eb2340-f156-4127-b844-79013849b5d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 11:53:04.689175    1740 system_pods.go:61] "csi-hostpath-resizer-0" [a39fc7e1-21e6-43e0-8a71-76fc3122aa67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 11:53:04.689182    1740 system_pods.go:61] "csi-hostpathplugin-s878j" [f7db805b-46b1-4d4f-b620-4bb9732a0ba0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 11:53:04.689185    1740 system_pods.go:61] "etcd-addons-221000" [4500c338-1b0c-4d39-b5b6-76d42cf285f5] Running
	I0918 11:53:04.689188    1740 system_pods.go:61] "kube-apiserver-addons-221000" [93a9cc2e-3463-4332-a37c-86437106ed5e] Running
	I0918 11:53:04.689190    1740 system_pods.go:61] "kube-controller-manager-addons-221000" [eaeef50f-2b51-43b2-91c9-a4f97f5460ae] Running
	I0918 11:53:04.689193    1740 system_pods.go:61] "kube-proxy-q7gqn" [e971c33c-7d1b-47b7-9ff5-3a629f12fb57] Running
	I0918 11:53:04.689195    1740 system_pods.go:61] "kube-scheduler-addons-221000" [9d39af63-6f0a-4855-9435-f8e7af26869e] Running
	I0918 11:53:04.689199    1740 system_pods.go:61] "metrics-server-7c66d45ddc-ph4qt" [193ff040-48cb-4429-8aa3-4065ecb5241d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 11:53:04.689205    1740 system_pods.go:61] "snapshot-controller-58dbcc7b99-89j9m" [d9a96c2a-2231-4dea-abbf-16875dd2b1d7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 11:53:04.689210    1740 system_pods.go:61] "snapshot-controller-58dbcc7b99-xwwxn" [5a80446c-a3a8-4ce7-8ac4-2894087691fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 11:53:04.689214    1740 system_pods.go:61] "storage-provisioner" [88ff8527-97e2-4317-8d5b-a2502e8cb7f7] Running
	I0918 11:53:04.689217    1740 system_pods.go:74] duration metric: took 4.45475ms to wait for pod list to return data ...
	I0918 11:53:04.689220    1740 default_sa.go:34] waiting for default service account to be created ...
	I0918 11:53:04.690951    1740 default_sa.go:45] found service account: "default"
	I0918 11:53:04.690958    1740 default_sa.go:55] duration metric: took 1.736083ms for default service account to be created ...
	I0918 11:53:04.690961    1740 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 11:53:04.694962    1740 system_pods.go:86] 13 kube-system pods found
	I0918 11:53:04.694971    1740 system_pods.go:89] "coredns-5dd5756b68-mbgns" [376db80e-bef7-49a8-805c-d250bbb5ddc5] Running
	I0918 11:53:04.694976    1740 system_pods.go:89] "csi-hostpath-attacher-0" [b3eb2340-f156-4127-b844-79013849b5d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 11:53:04.694979    1740 system_pods.go:89] "csi-hostpath-resizer-0" [a39fc7e1-21e6-43e0-8a71-76fc3122aa67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 11:53:04.694983    1740 system_pods.go:89] "csi-hostpathplugin-s878j" [f7db805b-46b1-4d4f-b620-4bb9732a0ba0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 11:53:04.694986    1740 system_pods.go:89] "etcd-addons-221000" [4500c338-1b0c-4d39-b5b6-76d42cf285f5] Running
	I0918 11:53:04.694988    1740 system_pods.go:89] "kube-apiserver-addons-221000" [93a9cc2e-3463-4332-a37c-86437106ed5e] Running
	I0918 11:53:04.694990    1740 system_pods.go:89] "kube-controller-manager-addons-221000" [eaeef50f-2b51-43b2-91c9-a4f97f5460ae] Running
	I0918 11:53:04.694994    1740 system_pods.go:89] "kube-proxy-q7gqn" [e971c33c-7d1b-47b7-9ff5-3a629f12fb57] Running
	I0918 11:53:04.694996    1740 system_pods.go:89] "kube-scheduler-addons-221000" [9d39af63-6f0a-4855-9435-f8e7af26869e] Running
	I0918 11:53:04.694999    1740 system_pods.go:89] "metrics-server-7c66d45ddc-ph4qt" [193ff040-48cb-4429-8aa3-4065ecb5241d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 11:53:04.695003    1740 system_pods.go:89] "snapshot-controller-58dbcc7b99-89j9m" [d9a96c2a-2231-4dea-abbf-16875dd2b1d7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 11:53:04.695006    1740 system_pods.go:89] "snapshot-controller-58dbcc7b99-xwwxn" [5a80446c-a3a8-4ce7-8ac4-2894087691fb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 11:53:04.695009    1740 system_pods.go:89] "storage-provisioner" [88ff8527-97e2-4317-8d5b-a2502e8cb7f7] Running
	I0918 11:53:04.695012    1740 system_pods.go:126] duration metric: took 4.049ms to wait for k8s-apps to be running ...
	I0918 11:53:04.695014    1740 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 11:53:04.695074    1740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 11:53:04.700754    1740 system_svc.go:56] duration metric: took 5.736541ms WaitForService to wait for kubelet.
	I0918 11:53:04.700761    1740 kubeadm.go:581] duration metric: took 4.486195916s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0918 11:53:04.700771    1740 node_conditions.go:102] verifying NodePressure condition ...
	I0918 11:53:04.702224    1740 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0918 11:53:04.702234    1740 node_conditions.go:123] node cpu capacity is 2
	I0918 11:53:04.702239    1740 node_conditions.go:105] duration metric: took 1.466083ms to run NodePressure ...
	I0918 11:53:04.702244    1740 start.go:228] waiting for startup goroutines ...
	I0918 11:53:04.851794    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:05.352908    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:05.851843    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:06.351684    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:06.787954    1740 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 11:53:06.787972    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:06.819465    1740 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0918 11:53:06.824867    1740 addons.go:231] Setting addon gcp-auth=true in "addons-221000"
	I0918 11:53:06.824887    1740 host.go:66] Checking if "addons-221000" exists ...
	I0918 11:53:06.825624    1740 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 11:53:06.825631    1740 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/addons-221000/id_rsa Username:docker}
	I0918 11:53:06.851875    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:06.860327    1740 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0918 11:53:06.868318    1740 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0918 11:53:06.871302    1740 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 11:53:06.871308    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 11:53:06.876186    1740 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 11:53:06.876191    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 11:53:06.881437    1740 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 11:53:06.881443    1740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0918 11:53:06.887709    1740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 11:53:07.124491    1740 addons.go:467] Verifying addon gcp-auth=true in "addons-221000"
	I0918 11:53:07.129030    1740 out.go:177] * Verifying gcp-auth addon...
	I0918 11:53:07.137330    1740 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0918 11:53:07.139319    1740 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 11:53:07.139325    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:07.142056    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:07.351786    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:07.647787    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:07.851746    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:08.144923    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:08.351959    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:08.646023    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:08.851939    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:09.146053    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:09.352896    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:09.645776    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:09.851792    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:10.146843    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:10.351428    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:10.645894    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:10.852236    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:11.145951    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:11.352216    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:11.645774    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:11.852232    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:12.145929    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:12.355119    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:12.840629    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:12.851603    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:13.145852    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:13.351962    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:13.646245    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:13.852112    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:14.144894    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:14.351204    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:14.646046    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:15.059939    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:15.145929    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:15.351906    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:15.646033    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:15.852022    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:16.145893    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:16.351976    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:16.645740    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:16.852007    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:17.145547    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:17.352003    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:17.646147    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:17.853011    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:18.145960    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:18.353519    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:18.646257    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:18.851829    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:19.143968    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:19.351778    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:19.645563    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:19.851678    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:20.145637    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:20.351425    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:20.645727    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:20.852080    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:21.145053    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:21.352010    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:21.645631    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:21.851983    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:22.146102    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:22.351559    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:22.645364    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:22.851995    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:23.145511    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:23.351664    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:23.646427    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:23.851813    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:24.144653    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:24.351670    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:24.645732    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:24.851659    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:25.145755    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:25.350286    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:25.645572    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:25.851968    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:26.145692    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:26.352042    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:26.645575    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:26.852307    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:27.145498    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:27.352114    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:27.645719    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:27.851965    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:28.145987    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:28.351772    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:28.645594    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:28.851993    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:29.145747    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:29.351502    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:29.645697    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:29.851953    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:30.145495    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:30.351423    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:30.645330    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:30.852142    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:31.145422    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:31.352025    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:31.645572    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:31.852021    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:32.145742    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:32.351596    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:32.645761    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:32.851846    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:33.145666    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:33.351607    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:33.646262    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:33.852115    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:34.145556    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:34.353136    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:34.644719    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:34.852049    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:35.145343    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:35.351551    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:35.645361    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:35.851476    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:36.143766    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:36.351679    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:36.645788    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:36.851584    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:37.145933    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:37.351445    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 11:53:37.646034    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:37.851557    1740 kapi.go:107] duration metric: took 35.013102459s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0918 11:53:38.145661    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:38.645759    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:39.145382    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:39.645858    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:40.145851    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:40.645842    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:41.145511    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:41.646176    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:42.145937    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:42.645521    1740 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 11:53:43.145380    1740 kapi.go:107] duration metric: took 36.008387458s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0918 11:53:43.148984    1740 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-221000 cluster.
	I0918 11:53:43.152958    1740 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 11:53:43.156871    1740 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0918 11:59:00.190235    1740 kapi.go:107] duration metric: took 6m0.011650291s to wait for kubernetes.io/minikube-addons=registry ...
	W0918 11:59:00.190354    1740 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0918 11:59:00.236861    1740 kapi.go:107] duration metric: took 6m0.007418791s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0918 11:59:00.236887    1740 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0918 11:59:00.245622    1740 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, default-storageclass, metrics-server, storage-provisioner, inspektor-gadget, volumesnapshots, csi-hostpath-driver, gcp-auth
	I0918 11:59:00.252559    1740 addons.go:502] enable addons completed in 6m0.086876584s: enabled=[ingress-dns cloud-spanner default-storageclass metrics-server storage-provisioner inspektor-gadget volumesnapshots csi-hostpath-driver gcp-auth]
	I0918 11:59:00.252570    1740 start.go:233] waiting for cluster config update ...
	I0918 11:59:00.252577    1740 start.go:242] writing updated cluster config ...
	I0918 11:59:00.253046    1740 ssh_runner.go:195] Run: rm -f paused
	I0918 11:59:00.282509    1740 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I0918 11:59:00.285533    1740 out.go:177] * Done! kubectl is now configured to use "addons-221000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-18 18:52:28 UTC, ends at Mon 2023-09-18 19:12:24 UTC. --
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.378956054Z" level=info msg="shim disconnected" id=db4658d155f2578fc668ba94399fd4b64164a06274f25bb873a3470c521d2b3b namespace=moby
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.378988221Z" level=warning msg="cleaning up after shim disconnected" id=db4658d155f2578fc668ba94399fd4b64164a06274f25bb873a3470c521d2b3b namespace=moby
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.378992929Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:12:04 addons-221000 dockerd[1100]: time="2023-09-18T19:12:04.436636476Z" level=info msg="ignoring event" container=f31f590bd070326784aa894bb3b130e04a21f13ff6b7835f98e9c9c994be5c89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.437103975Z" level=info msg="shim disconnected" id=f31f590bd070326784aa894bb3b130e04a21f13ff6b7835f98e9c9c994be5c89 namespace=moby
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.437132058Z" level=warning msg="cleaning up after shim disconnected" id=f31f590bd070326784aa894bb3b130e04a21f13ff6b7835f98e9c9c994be5c89 namespace=moby
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.437137642Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:12:04 addons-221000 dockerd[1106]: time="2023-09-18T19:12:04.441266547Z" level=warning msg="cleanup warnings time=\"2023-09-18T19:12:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 18 19:12:10 addons-221000 dockerd[1100]: time="2023-09-18T19:12:10.222057782Z" level=info msg="ignoring event" container=ab33a0f8756c5abdbe52f99802252c9d88620a35e7026b8d48ddc961d9fd608a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:12:10 addons-221000 dockerd[1106]: time="2023-09-18T19:12:10.222121782Z" level=info msg="shim disconnected" id=ab33a0f8756c5abdbe52f99802252c9d88620a35e7026b8d48ddc961d9fd608a namespace=moby
	Sep 18 19:12:10 addons-221000 dockerd[1106]: time="2023-09-18T19:12:10.222148615Z" level=warning msg="cleaning up after shim disconnected" id=ab33a0f8756c5abdbe52f99802252c9d88620a35e7026b8d48ddc961d9fd608a namespace=moby
	Sep 18 19:12:10 addons-221000 dockerd[1106]: time="2023-09-18T19:12:10.222152574Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:12:10 addons-221000 dockerd[1100]: time="2023-09-18T19:12:10.284872898Z" level=info msg="ignoring event" container=b759b3bc87e672ab6709e17bb94b6e35eee5335882a2a4303af5bb7802b4d324 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:12:10 addons-221000 dockerd[1106]: time="2023-09-18T19:12:10.284977606Z" level=info msg="shim disconnected" id=b759b3bc87e672ab6709e17bb94b6e35eee5335882a2a4303af5bb7802b4d324 namespace=moby
	Sep 18 19:12:10 addons-221000 dockerd[1106]: time="2023-09-18T19:12:10.285020689Z" level=warning msg="cleaning up after shim disconnected" id=b759b3bc87e672ab6709e17bb94b6e35eee5335882a2a4303af5bb7802b4d324 namespace=moby
	Sep 18 19:12:10 addons-221000 dockerd[1106]: time="2023-09-18T19:12:10.285025106Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:12:10 addons-221000 cri-dockerd[995]: time="2023-09-18T19:12:10Z" level=error msg="EOF Failed to get stats from container ab33a0f8756c5abdbe52f99802252c9d88620a35e7026b8d48ddc961d9fd608a"
	Sep 18 19:12:14 addons-221000 dockerd[1106]: time="2023-09-18T19:12:14.398430134Z" level=info msg="shim disconnected" id=afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a namespace=moby
	Sep 18 19:12:14 addons-221000 dockerd[1100]: time="2023-09-18T19:12:14.398477592Z" level=info msg="ignoring event" container=afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:12:14 addons-221000 dockerd[1106]: time="2023-09-18T19:12:14.398687758Z" level=warning msg="cleaning up after shim disconnected" id=afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a namespace=moby
	Sep 18 19:12:14 addons-221000 dockerd[1106]: time="2023-09-18T19:12:14.398698092Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:12:14 addons-221000 dockerd[1106]: time="2023-09-18T19:12:14.419717366Z" level=info msg="shim disconnected" id=a7e6c06852560a1d0bdec45aab8087733475e5299415ad38f796c38431947c7a namespace=moby
	Sep 18 19:12:14 addons-221000 dockerd[1106]: time="2023-09-18T19:12:14.419822949Z" level=warning msg="cleaning up after shim disconnected" id=a7e6c06852560a1d0bdec45aab8087733475e5299415ad38f796c38431947c7a namespace=moby
	Sep 18 19:12:14 addons-221000 dockerd[1106]: time="2023-09-18T19:12:14.419832116Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:12:14 addons-221000 dockerd[1100]: time="2023-09-18T19:12:14.420080948Z" level=info msg="ignoring event" container=a7e6c06852560a1d0bdec45aab8087733475e5299415ad38f796c38431947c7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED              STATE               NAME                      ATTEMPT             POD ID
	e55be7fdce01f       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98          About a minute ago   Running             headlamp                  0                   6eea9635aa263
	6009e365438d8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf   18 minutes ago       Running             gcp-auth                  0                   e321131e2d88d
	96b1e19d34d0f       ba04bb24b9575                                                                                                  19 minutes ago       Running             storage-provisioner       0                   31030bebae5dd
	3397ed73112e1       97e04611ad434                                                                                                  19 minutes ago       Running             coredns                   0                   ecf214ed85c34
	3b8b236037bf7       7da62c127fc0f                                                                                                  19 minutes ago       Running             kube-proxy                0                   efd5b0f304a7c
	17d16f9191cb9       64fc40cee3716                                                                                                  19 minutes ago       Running             kube-scheduler            0                   6dcf2ed48fa0d
	cb85fb8fd00cf       89d57b83c1786                                                                                                  19 minutes ago       Running             kube-controller-manager   0                   d0674da883f4f
	4a8eb16a561d8       30bb499447fe1                                                                                                  19 minutes ago       Running             kube-apiserver            0                   2a8be15cc8448
	2a7ec1fe69df2       9cdd6470f48c8                                                                                                  19 minutes ago       Running             etcd                      0                   0f317e031c491
	
	* 
	* ==> coredns [3397ed73112e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51523 - 34394 "HINFO IN 2890994018648264973.7586751697303130781. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046989834s
	[INFO] 10.244.0.11:51746 - 24273 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125498s
	[INFO] 10.244.0.11:38492 - 22517 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000252786s
	[INFO] 10.244.0.11:44956 - 48338 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000035291s
	[INFO] 10.244.0.11:34979 - 63104 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000059957s
	[INFO] 10.244.0.11:38551 - 60811 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000037916s
	[INFO] 10.244.0.11:44454 - 13651 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000042082s
	[INFO] 10.244.0.11:41401 - 32727 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001047395s
	[INFO] 10.244.0.11:40924 - 8451 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001040478s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-221000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-221000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36
	                    minikube.k8s.io/name=addons-221000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_18T11_52_46_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-221000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Sep 2023 18:52:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-221000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Sep 2023 19:12:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Sep 2023 19:11:52 +0000   Mon, 18 Sep 2023 18:52:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Sep 2023 19:11:52 +0000   Mon, 18 Sep 2023 18:52:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Sep 2023 19:11:52 +0000   Mon, 18 Sep 2023 18:52:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Sep 2023 19:11:52 +0000   Mon, 18 Sep 2023 18:52:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-221000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 605bc2fc72a045ae88e907db961da3d3
	  System UUID:                605bc2fc72a045ae88e907db961da3d3
	  Boot ID:                    6a9990c2-fe5e-48d8-97ca-ea50d8c8e3b8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-d4c87556c-2vm8d                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  headlamp                    headlamp-699c48fb74-6fg5z                0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 coredns-5dd5756b68-mbgns                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19m
	  kube-system                 etcd-addons-221000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         19m
	  kube-system                 kube-apiserver-addons-221000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-addons-221000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-q7gqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-221000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 19m   kube-proxy       
	  Normal  Starting                 19m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m   kubelet          Node addons-221000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m   kubelet          Node addons-221000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m   kubelet          Node addons-221000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m   kubelet          Node addons-221000 status is now: NodeReady
	  Normal  RegisteredNode           19m   node-controller  Node addons-221000 event: Registered Node addons-221000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.641705] EINJ: EINJ table not found.
	[  +0.512113] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044199] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000795] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.076142] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.068934] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.415221] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.169313] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +0.075007] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[  +0.084392] systemd-fstab-generator[726]: Ignoring "noauto" for root device
	[  +1.144805] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.094980] systemd-fstab-generator[914]: Ignoring "noauto" for root device
	[  +0.076494] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[  +0.074272] systemd-fstab-generator[936]: Ignoring "noauto" for root device
	[  +0.076208] systemd-fstab-generator[947]: Ignoring "noauto" for root device
	[  +0.088287] systemd-fstab-generator[988]: Ignoring "noauto" for root device
	[  +2.538937] systemd-fstab-generator[1093]: Ignoring "noauto" for root device
	[  +2.473246] kauditd_printk_skb: 29 callbacks suppressed
	[  +3.092849] systemd-fstab-generator[1411]: Ignoring "noauto" for root device
	[  +4.633357] systemd-fstab-generator[2291]: Ignoring "noauto" for root device
	[Sep18 18:53] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.646092] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.523270] kauditd_printk_skb: 55 callbacks suppressed
	[ +11.548658] kauditd_printk_skb: 8 callbacks suppressed
	[Sep18 19:12] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [2a7ec1fe69df] <==
	* {"level":"info","ts":"2023-09-18T18:52:43.023404Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T18:52:43.023573Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-18T18:52:43.02399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-09-18T18:52:43.024241Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-18T18:52:43.02431Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-18T18:52:43.024433Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T18:52:43.024491Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T18:52:43.025343Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T18:53:12.839558Z","caller":"traceutil/trace.go:171","msg":"trace[499305584] linearizableReadLoop","detail":"{readStateIndex:693; appliedIndex:692; }","duration":"193.895541ms","start":"2023-09-18T18:53:12.645653Z","end":"2023-09-18T18:53:12.839549Z","steps":["trace[499305584] 'read index received'  (duration: 193.791791ms)","trace[499305584] 'applied index is now lower than readState.Index'  (duration: 103.417µs)"],"step_count":2}
	{"level":"info","ts":"2023-09-18T18:53:12.839607Z","caller":"traceutil/trace.go:171","msg":"trace[593448589] transaction","detail":"{read_only:false; response_revision:671; number_of_response:1; }","duration":"309.762625ms","start":"2023-09-18T18:53:12.529838Z","end":"2023-09-18T18:53:12.839601Z","steps":["trace[593448589] 'process raft request'  (duration: 309.63075ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:53:12.839655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.989167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10532"}
	{"level":"info","ts":"2023-09-18T18:53:12.839671Z","caller":"traceutil/trace.go:171","msg":"trace[1312627899] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:671; }","duration":"194.031ms","start":"2023-09-18T18:53:12.645637Z","end":"2023-09-18T18:53:12.839668Z","steps":["trace[1312627899] 'agreement among raft nodes before linearized reading'  (duration: 193.963833ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:53:12.8398Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-18T18:53:12.529832Z","time spent":"309.7915ms","remote":"127.0.0.1:54556","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:669 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-09-18T18:53:15.058529Z","caller":"traceutil/trace.go:171","msg":"trace[42488278] linearizableReadLoop","detail":"{readStateIndex:694; appliedIndex:693; }","duration":"207.837416ms","start":"2023-09-18T18:53:14.850683Z","end":"2023-09-18T18:53:15.05852Z","steps":["trace[42488278] 'read index received'  (duration: 207.76525ms)","trace[42488278] 'applied index is now lower than readState.Index'  (duration: 71.708µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-18T18:53:15.058651Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.966125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:13 size:63849"}
	{"level":"info","ts":"2023-09-18T18:53:15.058696Z","caller":"traceutil/trace.go:171","msg":"trace[2008509018] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:13; response_revision:672; }","duration":"208.018666ms","start":"2023-09-18T18:53:14.850673Z","end":"2023-09-18T18:53:15.058691Z","steps":["trace[2008509018] 'agreement among raft nodes before linearized reading'  (duration: 207.886666ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:53:15.058824Z","caller":"traceutil/trace.go:171","msg":"trace[1590636669] transaction","detail":"{read_only:false; response_revision:672; number_of_response:1; }","duration":"214.775084ms","start":"2023-09-18T18:53:14.844046Z","end":"2023-09-18T18:53:15.058821Z","steps":["trace[1590636669] 'process raft request'  (duration: 214.424584ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T19:02:43.443637Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1094}
	{"level":"info","ts":"2023-09-18T19:02:43.458272Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1094,"took":"14.26046ms","hash":1108450172}
	{"level":"info","ts":"2023-09-18T19:02:43.458292Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1108450172,"revision":1094,"compact-revision":-1}
	{"level":"info","ts":"2023-09-18T19:07:43.446092Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1454}
	{"level":"info","ts":"2023-09-18T19:07:43.44678Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1454,"took":"494.332µs","hash":2695871163}
	{"level":"info","ts":"2023-09-18T19:07:43.446796Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2695871163,"revision":1454,"compact-revision":1094}
	{"level":"info","ts":"2023-09-18T19:11:06.589478Z","caller":"traceutil/trace.go:171","msg":"trace[847243809] transaction","detail":"{read_only:false; response_revision:2088; number_of_response:1; }","duration":"105.795246ms","start":"2023-09-18T19:11:06.483671Z","end":"2023-09-18T19:11:06.589466Z","steps":["trace[847243809] 'process raft request'  (duration: 76.680577ms)","trace[847243809] 'compare'  (duration: 28.988461ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-18T19:11:19.569525Z","caller":"traceutil/trace.go:171","msg":"trace[199883351] transaction","detail":"{read_only:false; response_revision:2131; number_of_response:1; }","duration":"161.881881ms","start":"2023-09-18T19:11:19.407634Z","end":"2023-09-18T19:11:19.569516Z","steps":["trace[199883351] 'process raft request'  (duration: 161.812756ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [6009e365438d] <==
	* 2023/09/18 18:53:42 GCP Auth Webhook started!
	2023/09/18 19:11:01 Ready to marshal response ...
	2023/09/18 19:11:01 Ready to write response ...
	2023/09/18 19:11:01 Ready to marshal response ...
	2023/09/18 19:11:01 Ready to write response ...
	2023/09/18 19:11:01 Ready to marshal response ...
	2023/09/18 19:11:01 Ready to write response ...
	2023/09/18 19:11:15 Ready to marshal response ...
	2023/09/18 19:11:15 Ready to write response ...
	2023/09/18 19:11:48 Ready to marshal response ...
	2023/09/18 19:11:48 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:12:25 up 19 min,  0 users,  load average: 0.21, 0.19, 0.17
	Linux addons-221000 5.10.57 #1 SMP PREEMPT Fri Sep 15 19:03:18 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4a8eb16a561d] <==
	* I0918 19:10:44.200176       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:11:01.618081       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.100.41"}
	I0918 19:11:27.087628       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0918 19:11:44.199807       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.823931       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.823948       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.825604       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.825722       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.833256       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.833284       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.846841       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.846862       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.847044       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.847095       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.856162       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.856179       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:12:03.857357       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:12:03.857367       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0918 19:12:04.848160       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0918 19:12:04.857030       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0918 19:12:04.865591       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0918 19:12:11.428741       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0918 19:12:14.330149       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0918 19:12:14.335152       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0918 19:12:15.339237       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	* 
	* ==> kube-controller-manager [cb85fb8fd00c] <==
	* W0918 19:12:06.404971       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:06.404987       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:07.765663       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:07.765681       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:09.044305       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:09.044327       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0918 19:12:09.124215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="1.875µs"
	W0918 19:12:09.340711       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:09.340733       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:12.778356       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:12.778375       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:13.814310       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:13.814362       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:15.053536       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:15.053562       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:15.339981       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:16.531417       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:16.531440       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:18.180272       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:18.180294       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:20.842470       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:20.842486       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:12:22.068003       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:12:22.068025       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0918 19:12:24.363587       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	
	* 
	* ==> kube-proxy [3b8b236037bf] <==
	* I0918 18:53:00.480730       1 server_others.go:69] "Using iptables proxy"
	I0918 18:53:00.488175       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0918 18:53:00.508601       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0918 18:53:00.508621       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 18:53:00.509405       1 server_others.go:152] "Using iptables Proxier"
	I0918 18:53:00.509431       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0918 18:53:00.509518       1 server.go:846] "Version info" version="v1.28.2"
	I0918 18:53:00.509524       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 18:53:00.512081       1 config.go:188] "Starting service config controller"
	I0918 18:53:00.512089       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0918 18:53:00.512105       1 config.go:97] "Starting endpoint slice config controller"
	I0918 18:53:00.512107       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0918 18:53:00.512284       1 config.go:315] "Starting node config controller"
	I0918 18:53:00.512287       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0918 18:53:01.015326       1 shared_informer.go:318] Caches are synced for node config
	I0918 18:53:01.015354       1 shared_informer.go:318] Caches are synced for service config
	I0918 18:53:01.015384       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [17d16f9191cb] <==
	* W0918 18:52:43.862760       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 18:52:43.863344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0918 18:52:43.862808       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 18:52:43.863392       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0918 18:52:43.862823       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 18:52:43.863442       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0918 18:52:43.862835       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 18:52:43.863451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0918 18:52:43.862850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 18:52:43.863466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0918 18:52:43.862867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 18:52:43.863530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0918 18:52:43.862878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 18:52:43.863565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0918 18:52:43.862692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 18:52:43.863575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0918 18:52:43.863300       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 18:52:43.863661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0918 18:52:44.751836       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 18:52:44.751859       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 18:52:44.752492       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 18:52:44.752505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0918 18:52:44.760405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 18:52:44.760417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0918 18:52:46.760908       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-18 18:52:28 UTC, ends at Mon 2023-09-18 19:12:25 UTC. --
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524125    2297 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-run\") pod \"7eea1399-50e6-40e6-8424-bacd2f982bff\" (UID: \"7eea1399-50e6-40e6-8424-bacd2f982bff\") "
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524134    2297 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-host\") pod \"7eea1399-50e6-40e6-8424-bacd2f982bff\" (UID: \"7eea1399-50e6-40e6-8424-bacd2f982bff\") "
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524142    2297 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-bpffs\") pod \"7eea1399-50e6-40e6-8424-bacd2f982bff\" (UID: \"7eea1399-50e6-40e6-8424-bacd2f982bff\") "
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524150    2297 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-modules\") pod \"7eea1399-50e6-40e6-8424-bacd2f982bff\" (UID: \"7eea1399-50e6-40e6-8424-bacd2f982bff\") "
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524157    2297 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-debugfs\") pod \"7eea1399-50e6-40e6-8424-bacd2f982bff\" (UID: \"7eea1399-50e6-40e6-8424-bacd2f982bff\") "
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524165    2297 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-cgroup\") pod \"7eea1399-50e6-40e6-8424-bacd2f982bff\" (UID: \"7eea1399-50e6-40e6-8424-bacd2f982bff\") "
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524209    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-cgroup" (OuterVolumeSpecName: "cgroup") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524422    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-bpffs" (OuterVolumeSpecName: "bpffs") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524436    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-run" (OuterVolumeSpecName: "run") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524443    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-host" (OuterVolumeSpecName: "host") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524450    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-modules" (OuterVolumeSpecName: "modules") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.524522    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-debugfs" (OuterVolumeSpecName: "debugfs") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.525925    2297 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eea1399-50e6-40e6-8424-bacd2f982bff-kube-api-access-pfdln" (OuterVolumeSpecName: "kube-api-access-pfdln") pod "7eea1399-50e6-40e6-8424-bacd2f982bff" (UID: "7eea1399-50e6-40e6-8424-bacd2f982bff"). InnerVolumeSpecName "kube-api-access-pfdln". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625197    2297 reconciler_common.go:300] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-bpffs\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625210    2297 reconciler_common.go:300] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-host\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625217    2297 reconciler_common.go:300] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-cgroup\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625222    2297 reconciler_common.go:300] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-modules\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625227    2297 reconciler_common.go:300] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-debugfs\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625233    2297 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pfdln\" (UniqueName: \"kubernetes.io/projected/7eea1399-50e6-40e6-8424-bacd2f982bff-kube-api-access-pfdln\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:14 addons-221000 kubelet[2297]: I0918 19:12:14.625238    2297 reconciler_common.go:300] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7eea1399-50e6-40e6-8424-bacd2f982bff-run\") on node \"addons-221000\" DevicePath \"\""
	Sep 18 19:12:15 addons-221000 kubelet[2297]: I0918 19:12:15.398217    2297 scope.go:117] "RemoveContainer" containerID="afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a"
	Sep 18 19:12:15 addons-221000 kubelet[2297]: I0918 19:12:15.407228    2297 scope.go:117] "RemoveContainer" containerID="afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a"
	Sep 18 19:12:15 addons-221000 kubelet[2297]: E0918 19:12:15.407605    2297 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a" containerID="afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a"
	Sep 18 19:12:15 addons-221000 kubelet[2297]: I0918 19:12:15.407627    2297 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a"} err="failed to get container status \"afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a\": rpc error: code = Unknown desc = Error response from daemon: No such container: afd4040b11d6ebc91070b0c00e982de4e986fedf052239b165939eb011f4f31a"
	Sep 18 19:12:16 addons-221000 kubelet[2297]: I0918 19:12:16.788684    2297 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7eea1399-50e6-40e6-8424-bacd2f982bff" path="/var/lib/kubelet/pods/7eea1399-50e6-40e6-8424-bacd2f982bff/volumes"
	
	* 
	* ==> storage-provisioner [96b1e19d34d0] <==
	* I0918 18:53:02.494940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 18:53:02.503242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 18:53:02.503674       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 18:53:02.507046       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 18:53:02.507107       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-221000_d01c2a4d-820c-47b2-b527-e47d47f0ebf9!
	I0918 18:53:02.507893       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"993ff549-fac0-4a25-b8bc-6e13c7f3eb70", APIVersion:"v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-221000_d01c2a4d-820c-47b2-b527-e47d47f0ebf9 became leader
	I0918 18:53:02.607443       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-221000_d01c2a4d-820c-47b2-b527-e47d47f0ebf9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-221000 -n addons-221000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-221000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CloudSpanner FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CloudSpanner (805.10s)

TestCertOptions (9.99s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-770000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
E0918 12:38:38.750760    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-770000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.7147185s)

-- stdout --
	* [cert-options-770000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-770000 in cluster cert-options-770000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-770000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-770000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-770000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-770000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-770000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (76.41525ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-770000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-770000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-770000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-770000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-770000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (38.979667ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-770000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-770000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-770000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-09-18 12:38:48.315116 -0700 PDT m=+2824.792433334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-770000 -n cert-options-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-770000 -n cert-options-770000: exit status 7 (28.081125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-770000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-770000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-770000
--- FAIL: TestCertOptions (9.99s)
E0918 12:39:00.235656    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:40:01.826050    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:40:38.906552    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory

TestCertExpiration (195.09s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-336000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-336000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.695404083s)

-- stdout --
	* [cert-expiration-336000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-336000 in cluster cert-expiration-336000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-336000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-336000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-336000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.222219292s)

-- stdout --
	* [cert-expiration-336000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-336000 in cluster cert-expiration-336000
	* Restarting existing qemu2 VM for "cert-expiration-336000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-336000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-336000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-336000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-336000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-336000 in cluster cert-expiration-336000
	* Restarting existing qemu2 VM for "cert-expiration-336000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-336000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-336000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-09-18 12:41:48.44217 -0700 PDT m=+3004.922863501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-336000 -n cert-expiration-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-336000 -n cert-expiration-336000: exit status 7 (64.821917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-336000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-336000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-336000
--- FAIL: TestCertExpiration (195.09s)

TestDockerFlags (10.05s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-564000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-564000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.808334042s)

-- stdout --
	* [docker-flags-564000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-564000 in cluster docker-flags-564000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-564000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:38:28.426292    4043 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:38:28.426408    4043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:38:28.426412    4043 out.go:309] Setting ErrFile to fd 2...
	I0918 12:38:28.426415    4043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:38:28.426542    4043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:38:28.427556    4043 out.go:303] Setting JSON to false
	I0918 12:38:28.442529    4043 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4082,"bootTime":1695061826,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:38:28.442619    4043 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:38:28.447802    4043 out.go:177] * [docker-flags-564000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:38:28.455774    4043 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:38:28.460759    4043 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:38:28.455861    4043 notify.go:220] Checking for updates...
	I0918 12:38:28.466683    4043 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:38:28.469742    4043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:38:28.472710    4043 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:38:28.475723    4043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:38:28.479097    4043 config.go:182] Loaded profile config "force-systemd-flag-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:38:28.479147    4043 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:38:28.483699    4043 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:38:28.502732    4043 start.go:298] selected driver: qemu2
	I0918 12:38:28.502739    4043 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:38:28.502745    4043 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:38:28.504758    4043 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:38:28.507682    4043 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:38:28.510804    4043 start_flags.go:917] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0918 12:38:28.510832    4043 cni.go:84] Creating CNI manager for ""
	I0918 12:38:28.510847    4043 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:38:28.510850    4043 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:38:28.510855    4043 start_flags.go:321] config:
	{Name:docker-flags-564000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-564000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:38:28.515030    4043 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:38:28.521714    4043 out.go:177] * Starting control plane node docker-flags-564000 in cluster docker-flags-564000
	I0918 12:38:28.525723    4043 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:38:28.525743    4043 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:38:28.525756    4043 cache.go:57] Caching tarball of preloaded images
	I0918 12:38:28.525833    4043 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:38:28.525838    4043 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:38:28.525904    4043 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/docker-flags-564000/config.json ...
	I0918 12:38:28.525917    4043 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/docker-flags-564000/config.json: {Name:mkff536612056e9c0b51f844675eae806b0747ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:28.526139    4043 start.go:365] acquiring machines lock for docker-flags-564000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:38:28.526171    4043 start.go:369] acquired machines lock for "docker-flags-564000" in 25.333µs
	I0918 12:38:28.526186    4043 start.go:93] Provisioning new machine with config: &{Name:docker-flags-564000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-564000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:38:28.526222    4043 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:38:28.534715    4043 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 12:38:28.551387    4043 start.go:159] libmachine.API.Create for "docker-flags-564000" (driver="qemu2")
	I0918 12:38:28.551411    4043 client.go:168] LocalClient.Create starting
	I0918 12:38:28.551467    4043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:38:28.551492    4043 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:28.551502    4043 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:28.551543    4043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:38:28.551562    4043 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:28.551569    4043 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:28.551897    4043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:38:28.688633    4043 main.go:141] libmachine: Creating SSH key...
	I0918 12:38:28.749372    4043 main.go:141] libmachine: Creating Disk image...
	I0918 12:38:28.749379    4043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:38:28.749529    4043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/disk.qcow2
	I0918 12:38:28.757969    4043 main.go:141] libmachine: STDOUT: 
	I0918 12:38:28.757984    4043 main.go:141] libmachine: STDERR: 
	I0918 12:38:28.758029    4043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/disk.qcow2 +20000M
	I0918 12:38:28.765260    4043 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:38:28.765273    4043 main.go:141] libmachine: STDERR: 
	I0918 12:38:28.765293    4043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/disk.qcow2
	I0918 12:38:28.765301    4043 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:38:28.765331    4043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:29:d7:ca:f7:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/disk.qcow2
	I0918 12:38:28.766841    4043 main.go:141] libmachine: STDOUT: 
	I0918 12:38:28.766854    4043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:38:28.766872    4043 client.go:171] LocalClient.Create took 215.4585ms
	I0918 12:38:30.769072    4043 start.go:128] duration metric: createHost completed in 2.242808458s
	I0918 12:38:30.769131    4043 start.go:83] releasing machines lock for "docker-flags-564000", held for 2.242992958s
	W0918 12:38:30.769187    4043 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:38:30.781095    4043 out.go:177] * Deleting "docker-flags-564000" in qemu2 ...
	W0918 12:38:30.801944    4043 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:38:30.801970    4043 start.go:703] Will try again in 5 seconds ...
	I0918 12:38:35.804085    4043 start.go:365] acquiring machines lock for docker-flags-564000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:38:35.887436    4043 start.go:369] acquired machines lock for "docker-flags-564000" in 83.273458ms
	I0918 12:38:35.887593    4043 start.go:93] Provisioning new machine with config: &{Name:docker-flags-564000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:docker-flags-564000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:38:35.887804    4043 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:38:35.896200    4043 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 12:38:35.942910    4043 start.go:159] libmachine.API.Create for "docker-flags-564000" (driver="qemu2")
	I0918 12:38:35.943236    4043 client.go:168] LocalClient.Create starting
	I0918 12:38:35.943495    4043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:38:35.943592    4043 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:35.943630    4043 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:35.943739    4043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:38:35.943790    4043 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:35.943808    4043 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:35.944661    4043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:38:36.075710    4043 main.go:141] libmachine: Creating SSH key...
	I0918 12:38:36.146809    4043 main.go:141] libmachine: Creating Disk image...
	I0918 12:38:36.146815    4043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:38:36.146950    4043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/disk.qcow2
	I0918 12:38:36.155402    4043 main.go:141] libmachine: STDOUT: 
	I0918 12:38:36.155416    4043 main.go:141] libmachine: STDERR: 
	I0918 12:38:36.155465    4043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/disk.qcow2 +20000M
	I0918 12:38:36.162610    4043 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:38:36.162636    4043 main.go:141] libmachine: STDERR: 
	I0918 12:38:36.162654    4043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/disk.qcow2
	I0918 12:38:36.162662    4043 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:38:36.162710    4043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:96:f3:43:87:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/docker-flags-564000/disk.qcow2
	I0918 12:38:36.164221    4043 main.go:141] libmachine: STDOUT: 
	I0918 12:38:36.164232    4043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:38:36.164246    4043 client.go:171] LocalClient.Create took 220.981667ms
	I0918 12:38:38.166394    4043 start.go:128] duration metric: createHost completed in 2.278601041s
	I0918 12:38:38.166478    4043 start.go:83] releasing machines lock for "docker-flags-564000", held for 2.279058375s
	W0918 12:38:38.167098    4043 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-564000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-564000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:38:38.179700    4043 out.go:177] 
	W0918 12:38:38.185035    4043 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:38:38.185077    4043 out.go:239] * 
	* 
	W0918 12:38:38.187667    4043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:38:38.195665    4043 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-564000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-564000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-564000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (76.905333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-564000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-564000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-564000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-564000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-564000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-564000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (42.506041ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-564000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-564000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-564000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-564000\"\n"
panic.go:523: *** TestDockerFlags FAILED at 2023-09-18 12:38:38.33077 -0700 PDT m=+2814.807900084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-564000 -n docker-flags-564000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-564000 -n docker-flags-564000: exit status 7 (27.617ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-564000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-564000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-564000
--- FAIL: TestDockerFlags (10.05s)
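Every start attempt in this failure fails on the same underlying error, `Failed to connect to "/var/run/socket_vmnet": Connection refused` — the socket_vmnet daemon was not running on the CI host, so the qemu2 driver could never attach its network. A minimal diagnostic sketch (an assumption, not part of the test harness; the socket path is taken from the log above, and the `brew services` hint assumes a Homebrew install of socket_vmnet):

```shell
# Path the qemu2 driver tries to connect to (from the error in the log above).
SOCK=/var/run/socket_vmnet

if [ -S "$SOCK" ]; then
    # A Unix socket exists, so the daemon is at least listening.
    echo "socket_vmnet is listening at $SOCK"
else
    # On Homebrew installs the daemon is typically managed by launchd:
    #   sudo brew services start socket_vmnet
    echo "no socket at $SOCK -- socket_vmnet daemon is not running"
fi
```

Restarting the daemon before this test group runs should let the qemu2 VMs come up; the `minikube delete -p docker-flags-564000` advice in the log only cleans up the half-created profile and does not address the missing daemon.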

TestForceSystemdFlag (9.98s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag


=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-847000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-847000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.777439292s)

-- stdout --
	* [force-systemd-flag-847000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-847000 in cluster force-systemd-flag-847000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-847000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:38:23.570276    4019 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:38:23.570414    4019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:38:23.570417    4019 out.go:309] Setting ErrFile to fd 2...
	I0918 12:38:23.570420    4019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:38:23.570577    4019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:38:23.571584    4019 out.go:303] Setting JSON to false
	I0918 12:38:23.586508    4019 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4077,"bootTime":1695061826,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:38:23.586575    4019 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:38:23.592529    4019 out.go:177] * [force-systemd-flag-847000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:38:23.599546    4019 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:38:23.603558    4019 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:38:23.599617    4019 notify.go:220] Checking for updates...
	I0918 12:38:23.609420    4019 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:38:23.612493    4019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:38:23.615516    4019 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:38:23.618462    4019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:38:23.621892    4019 config.go:182] Loaded profile config "force-systemd-env-724000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:38:23.621942    4019 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:38:23.626560    4019 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:38:23.633456    4019 start.go:298] selected driver: qemu2
	I0918 12:38:23.633463    4019 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:38:23.633469    4019 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:38:23.635524    4019 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:38:23.638550    4019 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:38:23.639927    4019 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 12:38:23.639954    4019 cni.go:84] Creating CNI manager for ""
	I0918 12:38:23.639963    4019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:38:23.639967    4019 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:38:23.639975    4019 start_flags.go:321] config:
	{Name:force-systemd-flag-847000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-847000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:38:23.644073    4019 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:38:23.647545    4019 out.go:177] * Starting control plane node force-systemd-flag-847000 in cluster force-systemd-flag-847000
	I0918 12:38:23.655492    4019 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:38:23.655508    4019 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:38:23.655518    4019 cache.go:57] Caching tarball of preloaded images
	I0918 12:38:23.655566    4019 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:38:23.655571    4019 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:38:23.655632    4019 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/force-systemd-flag-847000/config.json ...
	I0918 12:38:23.655644    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/force-systemd-flag-847000/config.json: {Name:mk468b06b28bf0b43b6e2c1c80d7ccaba68c03d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:23.655850    4019 start.go:365] acquiring machines lock for force-systemd-flag-847000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:38:23.655885    4019 start.go:369] acquired machines lock for "force-systemd-flag-847000" in 23.542µs
	I0918 12:38:23.655897    4019 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-847000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-847000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:38:23.655933    4019 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:38:23.664467    4019 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 12:38:23.679902    4019 start.go:159] libmachine.API.Create for "force-systemd-flag-847000" (driver="qemu2")
	I0918 12:38:23.679932    4019 client.go:168] LocalClient.Create starting
	I0918 12:38:23.679990    4019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:38:23.680016    4019 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:23.680028    4019 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:23.680070    4019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:38:23.680089    4019 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:23.680094    4019 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:23.680428    4019 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:38:23.796673    4019 main.go:141] libmachine: Creating SSH key...
	I0918 12:38:23.834034    4019 main.go:141] libmachine: Creating Disk image...
	I0918 12:38:23.834040    4019 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:38:23.834169    4019 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/disk.qcow2
	I0918 12:38:23.842801    4019 main.go:141] libmachine: STDOUT: 
	I0918 12:38:23.842813    4019 main.go:141] libmachine: STDERR: 
	I0918 12:38:23.842864    4019 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/disk.qcow2 +20000M
	I0918 12:38:23.849991    4019 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:38:23.850009    4019 main.go:141] libmachine: STDERR: 
	I0918 12:38:23.850023    4019 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/disk.qcow2
	I0918 12:38:23.850029    4019 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:38:23.850063    4019 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:a0:ca:1d:c5:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/disk.qcow2
	I0918 12:38:23.851577    4019 main.go:141] libmachine: STDOUT: 
	I0918 12:38:23.851589    4019 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:38:23.851607    4019 client.go:171] LocalClient.Create took 171.673417ms
	I0918 12:38:25.853748    4019 start.go:128] duration metric: createHost completed in 2.197839291s
	I0918 12:38:25.853818    4019 start.go:83] releasing machines lock for "force-systemd-flag-847000", held for 2.19796225s
	W0918 12:38:25.853883    4019 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:38:25.870857    4019 out.go:177] * Deleting "force-systemd-flag-847000" in qemu2 ...
	W0918 12:38:25.887419    4019 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:38:25.887443    4019 start.go:703] Will try again in 5 seconds ...
	I0918 12:38:30.889568    4019 start.go:365] acquiring machines lock for force-systemd-flag-847000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:38:30.889947    4019 start.go:369] acquired machines lock for "force-systemd-flag-847000" in 274.958µs
	I0918 12:38:30.890064    4019 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-847000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-847000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:38:30.890263    4019 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:38:30.899486    4019 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 12:38:30.946219    4019 start.go:159] libmachine.API.Create for "force-systemd-flag-847000" (driver="qemu2")
	I0918 12:38:30.946261    4019 client.go:168] LocalClient.Create starting
	I0918 12:38:30.946380    4019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:38:30.946454    4019 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:30.946494    4019 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:30.946567    4019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:38:30.946612    4019 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:30.946634    4019 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:30.947365    4019 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:38:31.075884    4019 main.go:141] libmachine: Creating SSH key...
	I0918 12:38:31.261725    4019 main.go:141] libmachine: Creating Disk image...
	I0918 12:38:31.261735    4019 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:38:31.261887    4019 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/disk.qcow2
	I0918 12:38:31.270540    4019 main.go:141] libmachine: STDOUT: 
	I0918 12:38:31.270556    4019 main.go:141] libmachine: STDERR: 
	I0918 12:38:31.270605    4019 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/disk.qcow2 +20000M
	I0918 12:38:31.277811    4019 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:38:31.277822    4019 main.go:141] libmachine: STDERR: 
	I0918 12:38:31.277835    4019 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/disk.qcow2
	I0918 12:38:31.277841    4019 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:38:31.277886    4019 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:04:8c:bb:ef:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-flag-847000/disk.qcow2
	I0918 12:38:31.279378    4019 main.go:141] libmachine: STDOUT: 
	I0918 12:38:31.279388    4019 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:38:31.279399    4019 client.go:171] LocalClient.Create took 333.139041ms
	I0918 12:38:33.281544    4019 start.go:128] duration metric: createHost completed in 2.391295666s
	I0918 12:38:33.281609    4019 start.go:83] releasing machines lock for "force-systemd-flag-847000", held for 2.391684125s
	W0918 12:38:33.281997    4019 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-847000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-847000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:38:33.292703    4019 out.go:177] 
	W0918 12:38:33.296798    4019 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:38:33.296892    4019 out.go:239] * 
	* 
	W0918 12:38:33.299480    4019 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:38:33.308732    4019 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-847000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-847000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-847000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (72.748875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-847000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-847000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-09-18 12:38:33.397514 -0700 PDT m=+2809.874551959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-847000 -n force-systemd-flag-847000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-847000 -n force-systemd-flag-847000: exit status 7 (36.272375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-847000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-847000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-847000
--- FAIL: TestForceSystemdFlag (9.98s)
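For context on the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` error above: the QEMU command is launched through `socket_vmnet_client`, whose first step is connecting to the socket_vmnet daemon's unix socket. The sketch below illustrates why a missing or unserved socket produces exactly this error class; the function name and test path are illustrative, not minikube code.

```go
package main

import (
	"fmt"
	"net"
)

// dialVmnetSocket mimics the first step socket_vmnet_client performs:
// dialing the daemon's unix socket. If nothing is listening at the path,
// Dial fails with "connect: connection refused" (or "no such file or
// directory" when the socket file is absent), matching the log above.
func dialVmnetSocket(path string) error {
	conn, err := net.Dial("unix", path)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := dialVmnetSocket("/var/run/socket_vmnet"); err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
	}
}
```

Since every failure in this group hits the same dial step before QEMU even starts, the likely root cause is the socket_vmnet daemon not running on the build agent rather than anything test-specific.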

TestForceSystemdEnv (10.2s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-724000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-724000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.992353375s)

-- stdout --
	* [force-systemd-env-724000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-724000 in cluster force-systemd-env-724000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-724000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:38:18.229607    3983 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:38:18.229736    3983 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:38:18.229739    3983 out.go:309] Setting ErrFile to fd 2...
	I0918 12:38:18.229742    3983 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:38:18.229880    3983 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:38:18.230979    3983 out.go:303] Setting JSON to false
	I0918 12:38:18.246185    3983 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4072,"bootTime":1695061826,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:38:18.246282    3983 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:38:18.252086    3983 out.go:177] * [force-systemd-env-724000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:38:18.260095    3983 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:38:18.264039    3983 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:38:18.260167    3983 notify.go:220] Checking for updates...
	I0918 12:38:18.270016    3983 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:38:18.272997    3983 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:38:18.276105    3983 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:38:18.279077    3983 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0918 12:38:18.282160    3983 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:38:18.286016    3983 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:38:18.293053    3983 start.go:298] selected driver: qemu2
	I0918 12:38:18.293062    3983 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:38:18.293068    3983 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:38:18.295097    3983 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:38:18.298007    3983 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:38:18.301163    3983 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 12:38:18.301189    3983 cni.go:84] Creating CNI manager for ""
	I0918 12:38:18.301210    3983 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:38:18.301214    3983 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:38:18.301219    3983 start_flags.go:321] config:
	{Name:force-systemd-env-724000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-724000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:38:18.305649    3983 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:38:18.313069    3983 out.go:177] * Starting control plane node force-systemd-env-724000 in cluster force-systemd-env-724000
	I0918 12:38:18.317062    3983 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:38:18.317086    3983 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:38:18.317093    3983 cache.go:57] Caching tarball of preloaded images
	I0918 12:38:18.317161    3983 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:38:18.317167    3983 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:38:18.317393    3983 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/force-systemd-env-724000/config.json ...
	I0918 12:38:18.317405    3983 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/force-systemd-env-724000/config.json: {Name:mkeac7299d2d0eac46a8751c8e3401831f90a0f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:18.317624    3983 start.go:365] acquiring machines lock for force-systemd-env-724000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:38:18.317657    3983 start.go:369] acquired machines lock for "force-systemd-env-724000" in 24µs
	I0918 12:38:18.317668    3983 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-724000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-724000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:38:18.317697    3983 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:38:18.326065    3983 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 12:38:18.342218    3983 start.go:159] libmachine.API.Create for "force-systemd-env-724000" (driver="qemu2")
	I0918 12:38:18.342263    3983 client.go:168] LocalClient.Create starting
	I0918 12:38:18.342322    3983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:38:18.342346    3983 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:18.342360    3983 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:18.342399    3983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:38:18.342418    3983 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:18.342425    3983 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:18.342747    3983 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:38:18.455949    3983 main.go:141] libmachine: Creating SSH key...
	I0918 12:38:18.575671    3983 main.go:141] libmachine: Creating Disk image...
	I0918 12:38:18.575681    3983 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:38:18.575841    3983 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/disk.qcow2
	I0918 12:38:18.584533    3983 main.go:141] libmachine: STDOUT: 
	I0918 12:38:18.584551    3983 main.go:141] libmachine: STDERR: 
	I0918 12:38:18.584613    3983 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/disk.qcow2 +20000M
	I0918 12:38:18.592004    3983 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:38:18.592017    3983 main.go:141] libmachine: STDERR: 
	I0918 12:38:18.592032    3983 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/disk.qcow2
	I0918 12:38:18.592040    3983 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:38:18.592076    3983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:8b:f0:07:b9:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/disk.qcow2
	I0918 12:38:18.593642    3983 main.go:141] libmachine: STDOUT: 
	I0918 12:38:18.593659    3983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:38:18.593685    3983 client.go:171] LocalClient.Create took 251.417833ms
	I0918 12:38:20.595726    3983 start.go:128] duration metric: createHost completed in 2.27806275s
	I0918 12:38:20.595750    3983 start.go:83] releasing machines lock for "force-systemd-env-724000", held for 2.278132042s
	W0918 12:38:20.595770    3983 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:38:20.601302    3983 out.go:177] * Deleting "force-systemd-env-724000" in qemu2 ...
	W0918 12:38:20.613131    3983 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:38:20.613142    3983 start.go:703] Will try again in 5 seconds ...
	I0918 12:38:25.615330    3983 start.go:365] acquiring machines lock for force-systemd-env-724000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:38:25.853980    3983 start.go:369] acquired machines lock for "force-systemd-env-724000" in 238.507083ms
	I0918 12:38:25.854099    3983 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-724000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-724000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:38:25.854300    3983 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:38:25.863807    3983 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 12:38:25.910255    3983 start.go:159] libmachine.API.Create for "force-systemd-env-724000" (driver="qemu2")
	I0918 12:38:25.910289    3983 client.go:168] LocalClient.Create starting
	I0918 12:38:25.910422    3983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:38:25.910482    3983 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:25.910500    3983 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:25.910575    3983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:38:25.910610    3983 main.go:141] libmachine: Decoding PEM data...
	I0918 12:38:25.910621    3983 main.go:141] libmachine: Parsing certificate...
	I0918 12:38:25.911080    3983 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:38:26.050250    3983 main.go:141] libmachine: Creating SSH key...
	I0918 12:38:26.133064    3983 main.go:141] libmachine: Creating Disk image...
	I0918 12:38:26.133069    3983 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:38:26.133209    3983 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/disk.qcow2
	I0918 12:38:26.141758    3983 main.go:141] libmachine: STDOUT: 
	I0918 12:38:26.141773    3983 main.go:141] libmachine: STDERR: 
	I0918 12:38:26.141833    3983 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/disk.qcow2 +20000M
	I0918 12:38:26.148984    3983 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:38:26.148998    3983 main.go:141] libmachine: STDERR: 
	I0918 12:38:26.149011    3983 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/disk.qcow2
	I0918 12:38:26.149019    3983 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:38:26.149060    3983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:5a:60:38:de:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/force-systemd-env-724000/disk.qcow2
	I0918 12:38:26.150652    3983 main.go:141] libmachine: STDOUT: 
	I0918 12:38:26.150673    3983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:38:26.150685    3983 client.go:171] LocalClient.Create took 240.393291ms
	I0918 12:38:28.152843    3983 start.go:128] duration metric: createHost completed in 2.298549125s
	I0918 12:38:28.152930    3983 start.go:83] releasing machines lock for "force-systemd-env-724000", held for 2.298962667s
	W0918 12:38:28.153347    3983 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-724000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-724000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:38:28.164100    3983 out.go:177] 
	W0918 12:38:28.169254    3983 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:38:28.169316    3983 out.go:239] * 
	* 
	W0918 12:38:28.172075    3983 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:38:28.182160    3983 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-724000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-724000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-724000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (75.097416ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-724000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-724000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-09-18 12:38:28.275534 -0700 PDT m=+2804.752475876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-724000 -n force-systemd-env-724000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-724000 -n force-systemd-env-724000: exit status 7 (32.740792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-724000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-724000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-724000
--- FAIL: TestForceSystemdEnv (10.20s)
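
The minikube lines above use the klog text format: a severity letter (I/W/E/F), an MMDD date, a wall-clock time with microseconds, the process ID, the source `file:line`, and then the message. A minimal sketch of splitting such a line into fields when post-processing these test logs (the regex and field names are this example's own, not part of minikube or klog):

```python
import re

# klog header shape: Lmmdd hh:mm:ss.uuuuuu pid file:line] msg
# (group names below are illustrative, not from the klog library)
KLOG_RE = re.compile(
    r"^(?P<sev>[IWEF])(?P<date>\d{4}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)$"
)

def parse_klog(line: str) -> dict:
    """Split one klog-formatted line into its header fields."""
    m = KLOG_RE.match(line.strip())
    if not m:
        raise ValueError("not a klog line")
    return m.groupdict()

rec = parse_klog("I0918 12:38:20.613142    3983 start.go:703] Will try again in 5 seconds ...")
```

Filtering a failed run's log on `sev` and `src` this way makes it easy to isolate, for example, every warning emitted from `start.go` during host creation.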

TestFunctional/parallel/ServiceCmdConnect (42.71s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-847000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-847000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-ctzmg" [8cc3e8ed-8be6-4416-87c0-3548bd1cbbe6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-ctzmg" [8cc3e8ed-8be6-4416-87c0-3548bd1cbbe6] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.006927459s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:32307
functional_test.go:1660: error fetching http://192.168.105.4:32307: Get "http://192.168.105.4:32307": dial tcp 192.168.105.4:32307: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32307: Get "http://192.168.105.4:32307": dial tcp 192.168.105.4:32307: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32307: Get "http://192.168.105.4:32307": dial tcp 192.168.105.4:32307: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32307: Get "http://192.168.105.4:32307": dial tcp 192.168.105.4:32307: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32307: Get "http://192.168.105.4:32307": dial tcp 192.168.105.4:32307: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32307: Get "http://192.168.105.4:32307": dial tcp 192.168.105.4:32307: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32307: Get "http://192.168.105.4:32307": dial tcp 192.168.105.4:32307: connect: connection refused
2023/09/18 12:16:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1660: error fetching http://192.168.105.4:32307: Get "http://192.168.105.4:32307": dial tcp 192.168.105.4:32307: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:32307: Get "http://192.168.105.4:32307": dial tcp 192.168.105.4:32307: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-847000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-ctzmg
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-847000/192.168.105.4
Start Time:       Mon, 18 Sep 2023 12:15:48 -0700
Labels:           app=hello-node-connect
pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
echoserver-arm:
Container ID:   docker://9de3fa653dead4c7773edccbd3c53ddd83b403c71318f5dec4753b80bd88dca9
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Mon, 18 Sep 2023 12:16:08 -0700
Finished:     Mon, 18 Sep 2023 12:16:08 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tgv6b (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-tgv6b:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  42s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-ctzmg to functional-847000
Normal   Pulling    41s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     38s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.652s (3.652s including waiting)
Normal   Created    22s (x3 over 38s)  kubelet            Created container echoserver-arm
Normal   Started    22s (x3 over 37s)  kubelet            Started container echoserver-arm
Normal   Pulled     22s (x2 over 37s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    7s (x4 over 36s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-ctzmg_default(8cc3e8ed-8be6-4416-87c0-3548bd1cbbe6)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-847000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
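
The `exec format error` above is the kernel refusing to execute a binary built for a different CPU architecture than the arm64 node, which suggests this image's `/usr/sbin/nginx` is not an aarch64 executable. The architecture a binary targets is recorded in the `e_machine` field of its ELF header; a minimal sketch of reading that field (assuming a little-endian ELF, and using a fabricated header rather than the real nginx binary):

```python
import struct

# e_machine values from the ELF specification
EM_X86_64 = 62    # x86-64
EM_AARCH64 = 183  # 64-bit Arm

def elf_machine(header: bytes) -> int:
    """Return e_machine: a 2-byte little-endian field at offset 18 of an ELF header."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    return struct.unpack_from("<H", header, 18)[0]

# Fabricated 20-byte header for illustration: 16-byte e_ident, then e_type, e_machine.
hdr = b"\x7fELF\x02\x01\x01" + b"\x00" * 9 + struct.pack("<HH", 2, EM_X86_64)
arch = elf_machine(hdr)  # reports an x86-64 binary
```

On an arm64 host, `execve()` of an `EM_X86_64` binary fails with `ENOEXEC`, which the container runtime surfaces as exactly this `exec format error` message.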
functional_test.go:1613: (dbg) Run:  kubectl --context functional-847000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.30.144
IPs:                      10.102.30.144
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32307/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-847000 -n functional-847000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-847000 ssh findmnt                                                                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-847000 ssh -- ls                                                                                          | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-847000 ssh cat                                                                                            | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|           | /mount-9p/test-1695064572470330000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-847000 ssh stat                                                                                           | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-847000 ssh stat                                                                                           | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-847000 ssh sudo                                                                                           | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-847000 ssh findmnt                                                                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-847000                                                                                                 | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1700026312/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-847000 ssh findmnt                                                                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-847000 ssh -- ls                                                                                          | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-847000 ssh sudo                                                                                           | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-847000                                                                                                 | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup817082744/001:/mount2    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-847000                                                                                                 | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup817082744/001:/mount1    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-847000                                                                                                 | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup817082744/001:/mount3    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-847000 ssh findmnt                                                                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-847000 ssh findmnt                                                                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-847000 ssh findmnt                                                                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-847000 ssh findmnt                                                                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-847000                                                                                                 | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-847000                                                                                                 | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-847000                                                                                                 | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-847000 --dry-run                                                                                       | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|           | -p functional-847000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| license   |                                                                                                                      | minikube          | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	| ssh       | functional-847000 ssh sudo                                                                                           | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|           | systemctl is-active crio                                                                                             |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 12:16:18
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 12:16:18.704945    2569 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:16:18.705092    2569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:16:18.705095    2569 out.go:309] Setting ErrFile to fd 2...
	I0918 12:16:18.705098    2569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:16:18.705242    2569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:16:18.706244    2569 out.go:303] Setting JSON to false
	I0918 12:16:18.721978    2569 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2752,"bootTime":1695061826,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:16:18.722081    2569 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:16:18.725185    2569 out.go:177] * [functional-847000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:16:18.733510    2569 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:16:18.737483    2569 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:16:18.733542    2569 notify.go:220] Checking for updates...
	I0918 12:16:18.744197    2569 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:16:18.745685    2569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:16:18.749190    2569 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:16:18.752202    2569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:16:18.755464    2569 config.go:182] Loaded profile config "functional-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:16:18.755732    2569 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:16:18.760190    2569 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 12:16:18.767263    2569 start.go:298] selected driver: qemu2
	I0918 12:16:18.767270    2569 start.go:902] validating driver "qemu2" against &{Name:functional-847000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:functional-847000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:16:18.767321    2569 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:16:18.769198    2569 cni.go:84] Creating CNI manager for ""
	I0918 12:16:18.769212    2569 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:16:18.769219    2569 start_flags.go:321] config:
	{Name:functional-847000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-847000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:16:18.781237    2569 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-18 19:13:39 UTC, ends at Mon 2023-09-18 19:16:30 UTC. --
	Sep 18 19:16:18 functional-847000 dockerd[6288]: time="2023-09-18T19:16:18.468349722Z" level=info msg="shim disconnected" id=bc135da570b0af6add9587d560db445997346ae3dee21f532c91798b013c9592 namespace=moby
	Sep 18 19:16:18 functional-847000 dockerd[6288]: time="2023-09-18T19:16:18.468388846Z" level=warning msg="cleaning up after shim disconnected" id=bc135da570b0af6add9587d560db445997346ae3dee21f532c91798b013c9592 namespace=moby
	Sep 18 19:16:18 functional-847000 dockerd[6288]: time="2023-09-18T19:16:18.468392930Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:16:19 functional-847000 dockerd[6288]: time="2023-09-18T19:16:19.645211785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 19:16:19 functional-847000 dockerd[6288]: time="2023-09-18T19:16:19.645244119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:16:19 functional-847000 dockerd[6288]: time="2023-09-18T19:16:19.645587785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 19:16:19 functional-847000 dockerd[6288]: time="2023-09-18T19:16:19.645603451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:16:19 functional-847000 dockerd[6288]: time="2023-09-18T19:16:19.655433927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 19:16:19 functional-847000 dockerd[6288]: time="2023-09-18T19:16:19.655507260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:16:19 functional-847000 dockerd[6288]: time="2023-09-18T19:16:19.655527677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 19:16:19 functional-847000 dockerd[6288]: time="2023-09-18T19:16:19.655539593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:16:19 functional-847000 cri-dockerd[6554]: time="2023-09-18T19:16:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/281e3eb380fd9854508d0bc8275a35bc6ebec1699788fabbe4d9ba6fe06f7fff/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 18 19:16:19 functional-847000 cri-dockerd[6554]: time="2023-09-18T19:16:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/510d95a6aae7fde8cd365289bda69f0f727b4e2fdedbe28029f252f202282aa2/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 18 19:16:20 functional-847000 dockerd[6282]: time="2023-09-18T19:16:20.039331800Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 18 19:16:22 functional-847000 cri-dockerd[6554]: time="2023-09-18T19:16:22Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 18 19:16:22 functional-847000 dockerd[6288]: time="2023-09-18T19:16:22.197294064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 19:16:22 functional-847000 dockerd[6288]: time="2023-09-18T19:16:22.197323314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:16:22 functional-847000 dockerd[6288]: time="2023-09-18T19:16:22.197332273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 19:16:22 functional-847000 dockerd[6288]: time="2023-09-18T19:16:22.197338231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:16:22 functional-847000 dockerd[6282]: time="2023-09-18T19:16:22.388564061Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 18 19:16:26 functional-847000 cri-dockerd[6554]: time="2023-09-18T19:16:26Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 18 19:16:26 functional-847000 dockerd[6288]: time="2023-09-18T19:16:26.748651353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 19:16:26 functional-847000 dockerd[6288]: time="2023-09-18T19:16:26.748687270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:16:26 functional-847000 dockerd[6288]: time="2023-09-18T19:16:26.748693145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 19:16:26 functional-847000 dockerd[6288]: time="2023-09-18T19:16:26.748697270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID
	4840d9fd28aeb       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         4 seconds ago        Running             kubernetes-dashboard        0                   510d95a6aae7f
	2d04daa021856       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   8 seconds ago        Running             dashboard-metrics-scraper   0                   281e3eb380fd9
	bc135da570b0a       72565bf5bbedf                                                                                          12 seconds ago       Exited              echoserver-arm              2                   84aaf3fc7a947
	6b5be5d04c686       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    16 seconds ago       Exited              mount-munger                0                   d719925ca4d12
	9de3fa653dead       72565bf5bbedf                                                                                          22 seconds ago       Exited              echoserver-arm              2                   5526a0bfe9493
	65c90fd7ca398       nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153                          32 seconds ago       Running             myfrontend                  0                   fc644d8627dcb
	e0f02a896b25a       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                          48 seconds ago       Running             nginx                       0                   18e7595c39b8e
	cdd0803f0b0cb       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         1                   4fb03cbb860de
	cdac16e16be0b       97e04611ad434                                                                                          About a minute ago   Running             coredns                     2                   e6387c0a9569d
	c89a5dd67e174       7da62c127fc0f                                                                                          About a minute ago   Running             kube-proxy                  2                   3489bd34acf67
	43c1216be4bde       30bb499447fe1                                                                                          About a minute ago   Running             kube-apiserver              0                   bfafc2d037cd4
	f56043fcfb982       89d57b83c1786                                                                                          About a minute ago   Running             kube-controller-manager     2                   cb591155c26d4
	3262d7c8a5d7f       9cdd6470f48c8                                                                                          About a minute ago   Running             etcd                        2                   7eebd5bd3002a
	f67b1852a8d9e       64fc40cee3716                                                                                          About a minute ago   Running             kube-scheduler              2                   615816f6bc980
	5388c197829fe       ba04bb24b9575                                                                                          About a minute ago   Exited              storage-provisioner         0                   42c69ab27645b
	6e029fe12af83       97e04611ad434                                                                                          2 minutes ago        Exited              coredns                     1                   cb7a7275af7b0
	94ddf0564ae6d       64fc40cee3716                                                                                          2 minutes ago        Exited              kube-scheduler              1                   bfaa09371b4f2
	4c7ca1c5d88fc       7da62c127fc0f                                                                                          2 minutes ago        Exited              kube-proxy                  1                   4e318fcf7ecda
	1943ec7156082       9cdd6470f48c8                                                                                          2 minutes ago        Exited              etcd                        1                   014a9aec26cc4
	0c9d5dcc35153       89d57b83c1786                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   ad6652113e368
	
	* 
	* ==> coredns [6e029fe12af8] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35090 - 22301 "HINFO IN 8790299216817915349.1978943822594942572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.007422328s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [cdac16e16be0] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55974 - 60985 "HINFO IN 4813087894308010664.2847401593487084594. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004188822s
	[INFO] 10.244.0.1:6029 - 45163 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.00010891s
	[INFO] 10.244.0.1:39807 - 40062 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000067954s
	[INFO] 10.244.0.1:48535 - 64447 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000028914s
	[INFO] 10.244.0.1:20601 - 59450 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000991357s
	[INFO] 10.244.0.1:32674 - 7105 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000074079s
	[INFO] 10.244.0.1:36055 - 1271 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000126784s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-847000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-847000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36
	                    minikube.k8s.io/name=functional-847000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_18T12_13_56_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Sep 2023 19:13:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-847000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Sep 2023 19:16:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Sep 2023 19:16:13 +0000   Mon, 18 Sep 2023 19:13:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Sep 2023 19:16:13 +0000   Mon, 18 Sep 2023 19:13:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Sep 2023 19:16:13 +0000   Mon, 18 Sep 2023 19:13:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Sep 2023 19:16:13 +0000   Mon, 18 Sep 2023 19:14:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-847000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e7b3e9f722f4b68bf45c5b4f96a0739
	  System UUID:                0e7b3e9f722f4b68bf45c5b4f96a0739
	  Boot ID:                    22f7494a-0f91-428b-a098-43d2bb63156b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-8g5cr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     hello-node-connect-7799dfb7c6-ctzmg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 coredns-5dd5756b68-tgwk6                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m20s
	  kube-system                 etcd-functional-847000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m34s
	  kube-system                 kube-apiserver-functional-847000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-functional-847000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-proxy-sb8tc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-functional-847000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-922ns    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-fvx7h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m19s              kube-proxy       
	  Normal  Starting                 77s                kube-proxy       
	  Normal  Starting                 118s               kube-proxy       
	  Normal  Starting                 2m34s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m34s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m34s              kubelet          Node functional-847000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m34s              kubelet          Node functional-847000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m34s              kubelet          Node functional-847000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m30s              kubelet          Node functional-847000 status is now: NodeReady
	  Normal  RegisteredNode           2m21s              node-controller  Node functional-847000 event: Registered Node functional-847000 in Controller
	  Normal  NodeNotReady             2m13s              kubelet          Node functional-847000 status is now: NodeNotReady
	  Normal  RegisteredNode           106s               node-controller  Node functional-847000 event: Registered Node functional-847000 in Controller
	  Normal  Starting                 82s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node functional-847000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node functional-847000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x7 over 82s)  kubelet          Node functional-847000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           66s                node-controller  Node functional-847000 event: Registered Node functional-847000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +5.031913] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.297452] systemd-fstab-generator[4047]: Ignoring "noauto" for root device
	[  +0.083604] systemd-fstab-generator[4058]: Ignoring "noauto" for root device
	[  +0.080464] systemd-fstab-generator[4069]: Ignoring "noauto" for root device
	[  +0.084761] systemd-fstab-generator[4080]: Ignoring "noauto" for root device
	[  +0.076893] systemd-fstab-generator[4146]: Ignoring "noauto" for root device
	[  +5.097153] kauditd_printk_skb: 34 callbacks suppressed
	[ +23.043707] systemd-fstab-generator[5825]: Ignoring "noauto" for root device
	[  +0.150003] systemd-fstab-generator[5858]: Ignoring "noauto" for root device
	[  +0.097832] systemd-fstab-generator[5869]: Ignoring "noauto" for root device
	[  +0.103226] systemd-fstab-generator[5882]: Ignoring "noauto" for root device
	[Sep18 19:15] systemd-fstab-generator[6434]: Ignoring "noauto" for root device
	[  +0.085823] systemd-fstab-generator[6445]: Ignoring "noauto" for root device
	[  +0.084394] systemd-fstab-generator[6466]: Ignoring "noauto" for root device
	[  +0.090378] systemd-fstab-generator[6477]: Ignoring "noauto" for root device
	[  +0.075345] systemd-fstab-generator[6537]: Ignoring "noauto" for root device
	[  +0.959073] systemd-fstab-generator[6795]: Ignoring "noauto" for root device
	[  +4.565477] kauditd_printk_skb: 29 callbacks suppressed
	[ +26.440071] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.004772] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.431270] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.075329] kauditd_printk_skb: 6 callbacks suppressed
	[Sep18 19:16] kauditd_printk_skb: 1 callbacks suppressed
	[  +9.468564] kauditd_printk_skb: 1 callbacks suppressed
	[ +13.444299] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [1943ec715608] <==
	* {"level":"info","ts":"2023-09-18T19:14:29.480034Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T19:14:31.362213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-18T19:14:31.362355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-18T19:14:31.362397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-09-18T19:14:31.362428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-09-18T19:14:31.362443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-18T19:14:31.362469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-09-18T19:14:31.362523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-18T19:14:31.365358Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-18T19:14:31.365846Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-18T19:14:31.365364Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-847000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-18T19:14:31.369096Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-18T19:14:31.370513Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-18T19:14:31.370751Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-18T19:14:31.371498Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-18T19:14:55.651174Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-18T19:14:55.651198Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-847000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-09-18T19:14:55.65124Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-18T19:14:55.651273Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-18T19:14:55.66043Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-18T19:14:55.66045Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-18T19:14:55.662153Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-09-18T19:14:55.667315Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-18T19:14:55.667368Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-18T19:14:55.667376Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-847000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [3262d7c8a5d7] <==
	* {"level":"info","ts":"2023-09-18T19:15:09.259215Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-18T19:15:09.259229Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-09-18T19:15:09.259324Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-18T19:15:09.259328Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-18T19:15:09.259535Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-18T19:15:09.259543Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-18T19:15:09.259547Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-18T19:15:09.259646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-09-18T19:15:09.259667Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-09-18T19:15:09.259704Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T19:15:09.259715Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T19:15:10.829503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-18T19:15:10.829636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-18T19:15:10.82971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-18T19:15:10.829744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-09-18T19:15:10.82976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-18T19:15:10.829785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-09-18T19:15:10.829807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-18T19:15:10.832211Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-847000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-18T19:15:10.832237Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-18T19:15:10.832471Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-18T19:15:10.834444Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-18T19:15:10.834495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-18T19:15:10.834849Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-18T19:15:10.834875Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:16:30 up 2 min,  0 users,  load average: 0.70, 0.41, 0.16
	Linux functional-847000 5.10.57 #1 SMP PREEMPT Fri Sep 15 19:03:18 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [43c1216be4bd] <==
	* I0918 19:15:11.498165       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0918 19:15:11.513410       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0918 19:15:11.513436       1 aggregator.go:166] initial CRD sync complete...
	I0918 19:15:11.513441       1 autoregister_controller.go:141] Starting autoregister controller
	I0918 19:15:11.513444       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0918 19:15:11.513447       1 cache.go:39] Caches are synced for autoregister controller
	I0918 19:15:11.549261       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0918 19:15:11.570417       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 19:15:12.401267       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0918 19:15:12.539596       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0918 19:15:12.542612       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0918 19:15:12.553080       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0918 19:15:12.560476       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 19:15:12.562589       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0918 19:15:24.236838       1 controller.go:624] quota admission added evaluator for: endpoints
	I0918 19:15:24.387273       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 19:15:34.139655       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.235.204"}
	I0918 19:15:39.279016       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.79.9"}
	I0918 19:15:48.665428       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0918 19:15:48.707889       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.30.144"}
	I0918 19:16:04.086904       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.94.41"}
	I0918 19:16:19.236786       1 controller.go:624] quota admission added evaluator for: namespaces
	I0918 19:16:19.312973       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.160.101"}
	I0918 19:16:19.333831       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.213.236"}
	
	* 
	* ==> kube-controller-manager [0c9d5dcc3515] <==
	* I0918 19:14:44.841562       1 shared_informer.go:318] Caches are synced for persistent volume
	I0918 19:14:44.841675       1 shared_informer.go:318] Caches are synced for GC
	I0918 19:14:44.841706       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0918 19:14:44.841952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="149.892µs"
	I0918 19:14:44.842373       1 shared_informer.go:318] Caches are synced for daemon sets
	I0918 19:14:44.843282       1 shared_informer.go:318] Caches are synced for HPA
	I0918 19:14:44.846434       1 shared_informer.go:318] Caches are synced for node
	I0918 19:14:44.846507       1 range_allocator.go:174] "Sending events to api server"
	I0918 19:14:44.846532       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0918 19:14:44.846550       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0918 19:14:44.846570       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0918 19:14:44.847055       1 shared_informer.go:318] Caches are synced for PVC protection
	I0918 19:14:44.848169       1 shared_informer.go:318] Caches are synced for ephemeral
	I0918 19:14:44.849279       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0918 19:14:44.849495       1 shared_informer.go:318] Caches are synced for expand
	I0918 19:14:44.899791       1 shared_informer.go:318] Caches are synced for deployment
	I0918 19:14:44.913909       1 shared_informer.go:318] Caches are synced for disruption
	I0918 19:14:44.915125       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0918 19:14:44.959948       1 shared_informer.go:318] Caches are synced for resource quota
	I0918 19:14:44.991815       1 shared_informer.go:318] Caches are synced for endpoint
	I0918 19:14:45.016461       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0918 19:14:45.026557       1 shared_informer.go:318] Caches are synced for resource quota
	I0918 19:14:45.371567       1 shared_informer.go:318] Caches are synced for garbage collector
	I0918 19:14:45.443296       1 shared_informer.go:318] Caches are synced for garbage collector
	I0918 19:14:45.443345       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [f56043fcfb98] <==
	* E0918 19:16:19.276844       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0918 19:16:19.277212       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0918 19:16:19.277245       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0918 19:16:19.280534       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.261518ms"
	E0918 19:16:19.280626       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0918 19:16:19.283196       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="2.52316ms"
	E0918 19:16:19.283276       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0918 19:16:19.283310       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0918 19:16:19.288340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="2.595119ms"
	E0918 19:16:19.288462       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0918 19:16:19.288619       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0918 19:16:19.300404       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-922ns"
	I0918 19:16:19.304321       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="6.494608ms"
	I0918 19:16:19.314310       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-fvx7h"
	I0918 19:16:19.321022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="16.675917ms"
	I0918 19:16:19.321045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="9.208µs"
	I0918 19:16:19.321063       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.453844ms"
	I0918 19:16:19.327894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.812941ms"
	I0918 19:16:19.328034       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="121.542µs"
	I0918 19:16:19.332221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="24.75µs"
	I0918 19:16:22.983867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="3.529878ms"
	I0918 19:16:22.983898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="12.834µs"
	I0918 19:16:23.404948       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="24.125µs"
	I0918 19:16:27.002245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="3.64135ms"
	I0918 19:16:27.002267       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.792µs"
	
	* 
	* ==> kube-proxy [4c7ca1c5d88f] <==
	* I0918 19:14:29.746748       1 server_others.go:69] "Using iptables proxy"
	I0918 19:14:32.037782       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0918 19:14:32.051409       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0918 19:14:32.051421       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 19:14:32.052685       1 server_others.go:152] "Using iptables Proxier"
	I0918 19:14:32.052719       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0918 19:14:32.052805       1 server.go:846] "Version info" version="v1.28.2"
	I0918 19:14:32.052864       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 19:14:32.053165       1 config.go:188] "Starting service config controller"
	I0918 19:14:32.053184       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0918 19:14:32.053196       1 config.go:97] "Starting endpoint slice config controller"
	I0918 19:14:32.053205       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0918 19:14:32.053401       1 config.go:315] "Starting node config controller"
	I0918 19:14:32.053413       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0918 19:14:32.153628       1 shared_informer.go:318] Caches are synced for node config
	I0918 19:14:32.153673       1 shared_informer.go:318] Caches are synced for service config
	I0918 19:14:32.153688       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [c89a5dd67e17] <==
	* I0918 19:15:12.969026       1 server_others.go:69] "Using iptables proxy"
	I0918 19:15:12.973211       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0918 19:15:12.981282       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0918 19:15:12.981294       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 19:15:12.981941       1 server_others.go:152] "Using iptables Proxier"
	I0918 19:15:12.981984       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0918 19:15:12.982069       1 server.go:846] "Version info" version="v1.28.2"
	I0918 19:15:12.982080       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 19:15:12.982427       1 config.go:188] "Starting service config controller"
	I0918 19:15:12.982438       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0918 19:15:12.982482       1 config.go:97] "Starting endpoint slice config controller"
	I0918 19:15:12.982487       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0918 19:15:12.983193       1 config.go:315] "Starting node config controller"
	I0918 19:15:12.983197       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0918 19:15:13.082751       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0918 19:15:13.082773       1 shared_informer.go:318] Caches are synced for service config
	I0918 19:15:13.083287       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [94ddf0564ae6] <==
	* I0918 19:14:29.961365       1 serving.go:348] Generated self-signed cert in-memory
	W0918 19:14:31.994863       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 19:14:31.994878       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 19:14:31.994883       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 19:14:31.994886       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 19:14:32.023179       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0918 19:14:32.023626       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 19:14:32.025637       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0918 19:14:32.025788       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0918 19:14:32.025852       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 19:14:32.026712       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0918 19:14:32.126964       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 19:14:55.656804       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0918 19:14:55.656826       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0918 19:14:55.656917       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [f67b1852a8d9] <==
	* I0918 19:15:11.471066       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0918 19:15:11.471852       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0918 19:15:11.471864       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 19:15:11.471871       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0918 19:15:11.475302       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 19:15:11.475322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0918 19:15:11.475411       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 19:15:11.475421       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 19:15:11.475486       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 19:15:11.475494       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0918 19:15:11.475538       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 19:15:11.475546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0918 19:15:11.475583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 19:15:11.475592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0918 19:15:11.475639       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 19:15:11.475647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0918 19:15:11.478247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:15:11.478259       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0918 19:15:11.478310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:15:11.478317       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0918 19:15:11.478558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 19:15:11.478602       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0918 19:15:11.478686       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 19:15:11.478715       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0918 19:15:12.872217       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-18 19:13:39 UTC, ends at Mon 2023-09-18 19:16:31 UTC. --
	Sep 18 19:16:13 functional-847000 kubelet[6801]: I0918 19:16:13.117692    6801 topology_manager.go:215] "Topology Admit Handler" podUID="ebf89895-9ac9-4a77-ad15-d68c1c62a5c2" podNamespace="default" podName="busybox-mount"
	Sep 18 19:16:13 functional-847000 kubelet[6801]: I0918 19:16:13.239334    6801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhdcw\" (UniqueName: \"kubernetes.io/projected/ebf89895-9ac9-4a77-ad15-d68c1c62a5c2-kube-api-access-zhdcw\") pod \"busybox-mount\" (UID: \"ebf89895-9ac9-4a77-ad15-d68c1c62a5c2\") " pod="default/busybox-mount"
	Sep 18 19:16:13 functional-847000 kubelet[6801]: I0918 19:16:13.239390    6801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ebf89895-9ac9-4a77-ad15-d68c1c62a5c2-test-volume\") pod \"busybox-mount\" (UID: \"ebf89895-9ac9-4a77-ad15-d68c1c62a5c2\") " pod="default/busybox-mount"
	Sep 18 19:16:16 functional-847000 kubelet[6801]: I0918 19:16:16.157655    6801 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ebf89895-9ac9-4a77-ad15-d68c1c62a5c2-test-volume\") pod \"ebf89895-9ac9-4a77-ad15-d68c1c62a5c2\" (UID: \"ebf89895-9ac9-4a77-ad15-d68c1c62a5c2\") "
	Sep 18 19:16:16 functional-847000 kubelet[6801]: I0918 19:16:16.157697    6801 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhdcw\" (UniqueName: \"kubernetes.io/projected/ebf89895-9ac9-4a77-ad15-d68c1c62a5c2-kube-api-access-zhdcw\") pod \"ebf89895-9ac9-4a77-ad15-d68c1c62a5c2\" (UID: \"ebf89895-9ac9-4a77-ad15-d68c1c62a5c2\") "
	Sep 18 19:16:16 functional-847000 kubelet[6801]: I0918 19:16:16.157794    6801 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf89895-9ac9-4a77-ad15-d68c1c62a5c2-test-volume" (OuterVolumeSpecName: "test-volume") pod "ebf89895-9ac9-4a77-ad15-d68c1c62a5c2" (UID: "ebf89895-9ac9-4a77-ad15-d68c1c62a5c2"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:16:16 functional-847000 kubelet[6801]: I0918 19:16:16.161238    6801 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf89895-9ac9-4a77-ad15-d68c1c62a5c2-kube-api-access-zhdcw" (OuterVolumeSpecName: "kube-api-access-zhdcw") pod "ebf89895-9ac9-4a77-ad15-d68c1c62a5c2" (UID: "ebf89895-9ac9-4a77-ad15-d68c1c62a5c2"). InnerVolumeSpecName "kube-api-access-zhdcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:16:16 functional-847000 kubelet[6801]: I0918 19:16:16.258101    6801 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zhdcw\" (UniqueName: \"kubernetes.io/projected/ebf89895-9ac9-4a77-ad15-d68c1c62a5c2-kube-api-access-zhdcw\") on node \"functional-847000\" DevicePath \"\""
	Sep 18 19:16:16 functional-847000 kubelet[6801]: I0918 19:16:16.258127    6801 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ebf89895-9ac9-4a77-ad15-d68c1c62a5c2-test-volume\") on node \"functional-847000\" DevicePath \"\""
	Sep 18 19:16:16 functional-847000 kubelet[6801]: I0918 19:16:16.934016    6801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d719925ca4d1215e8e4dba4fd9c42dd5d4f50f6ea0e88c697f6c8e6bf819170e"
	Sep 18 19:16:18 functional-847000 kubelet[6801]: I0918 19:16:18.398146    6801 scope.go:117] "RemoveContainer" containerID="4da4c99c7fd674aa14c1e28ab025d1240e6d23aa9a84e5bf15f2c40a0b660ff8"
	Sep 18 19:16:18 functional-847000 kubelet[6801]: I0918 19:16:18.946932    6801 scope.go:117] "RemoveContainer" containerID="4da4c99c7fd674aa14c1e28ab025d1240e6d23aa9a84e5bf15f2c40a0b660ff8"
	Sep 18 19:16:18 functional-847000 kubelet[6801]: I0918 19:16:18.947106    6801 scope.go:117] "RemoveContainer" containerID="bc135da570b0af6add9587d560db445997346ae3dee21f532c91798b013c9592"
	Sep 18 19:16:18 functional-847000 kubelet[6801]: E0918 19:16:18.947332    6801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-8g5cr_default(751e17c6-7b8b-4bf6-bb0d-97f745627ff3)\"" pod="default/hello-node-759d89bdcc-8g5cr" podUID="751e17c6-7b8b-4bf6-bb0d-97f745627ff3"
	Sep 18 19:16:19 functional-847000 kubelet[6801]: I0918 19:16:19.302720    6801 topology_manager.go:215] "Topology Admit Handler" podUID="c04d1d8a-74c2-4b78-9540-f0b80e57ea5c" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-922ns"
	Sep 18 19:16:19 functional-847000 kubelet[6801]: E0918 19:16:19.302754    6801 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ebf89895-9ac9-4a77-ad15-d68c1c62a5c2" containerName="mount-munger"
	Sep 18 19:16:19 functional-847000 kubelet[6801]: I0918 19:16:19.302773    6801 memory_manager.go:346] "RemoveStaleState removing state" podUID="ebf89895-9ac9-4a77-ad15-d68c1c62a5c2" containerName="mount-munger"
	Sep 18 19:16:19 functional-847000 kubelet[6801]: I0918 19:16:19.317717    6801 topology_manager.go:215] "Topology Admit Handler" podUID="333556e9-a33c-4d7a-a9d5-c7a61c11f2cf" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-fvx7h"
	Sep 18 19:16:19 functional-847000 kubelet[6801]: I0918 19:16:19.473260    6801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/333556e9-a33c-4d7a-a9d5-c7a61c11f2cf-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-fvx7h\" (UID: \"333556e9-a33c-4d7a-a9d5-c7a61c11f2cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fvx7h"
	Sep 18 19:16:19 functional-847000 kubelet[6801]: I0918 19:16:19.473488    6801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d46fl\" (UniqueName: \"kubernetes.io/projected/c04d1d8a-74c2-4b78-9540-f0b80e57ea5c-kube-api-access-d46fl\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-922ns\" (UID: \"c04d1d8a-74c2-4b78-9540-f0b80e57ea5c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-922ns"
	Sep 18 19:16:19 functional-847000 kubelet[6801]: I0918 19:16:19.473507    6801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnkd8\" (UniqueName: \"kubernetes.io/projected/333556e9-a33c-4d7a-a9d5-c7a61c11f2cf-kube-api-access-nnkd8\") pod \"kubernetes-dashboard-8694d4445c-fvx7h\" (UID: \"333556e9-a33c-4d7a-a9d5-c7a61c11f2cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-fvx7h"
	Sep 18 19:16:19 functional-847000 kubelet[6801]: I0918 19:16:19.473518    6801 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c04d1d8a-74c2-4b78-9540-f0b80e57ea5c-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-922ns\" (UID: \"c04d1d8a-74c2-4b78-9540-f0b80e57ea5c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-922ns"
	Sep 18 19:16:22 functional-847000 kubelet[6801]: I0918 19:16:22.980270    6801 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-922ns" podStartSLOduration=1.636377989 podCreationTimestamp="2023-09-18 19:16:19 +0000 UTC" firstStartedPulling="2023-09-18 19:16:19.809756169 +0000 UTC m=+71.488512821" lastFinishedPulling="2023-09-18 19:16:22.153622995 +0000 UTC m=+73.832379688" observedRunningTime="2023-09-18 19:16:22.980092773 +0000 UTC m=+74.658849467" watchObservedRunningTime="2023-09-18 19:16:22.980244856 +0000 UTC m=+74.659001550"
	Sep 18 19:16:23 functional-847000 kubelet[6801]: I0918 19:16:23.398433    6801 scope.go:117] "RemoveContainer" containerID="9de3fa653dead4c7773edccbd3c53ddd83b403c71318f5dec4753b80bd88dca9"
	Sep 18 19:16:23 functional-847000 kubelet[6801]: E0918 19:16:23.398546    6801 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-ctzmg_default(8cc3e8ed-8be6-4416-87c0-3548bd1cbbe6)\"" pod="default/hello-node-connect-7799dfb7c6-ctzmg" podUID="8cc3e8ed-8be6-4416-87c0-3548bd1cbbe6"
	
	* 
	* ==> kubernetes-dashboard [4840d9fd28ae] <==
	* 2023/09/18 19:16:26 Using namespace: kubernetes-dashboard
	2023/09/18 19:16:26 Using in-cluster config to connect to apiserver
	2023/09/18 19:16:26 Using secret token for csrf signing
	2023/09/18 19:16:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/09/18 19:16:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/09/18 19:16:26 Successful initial request to the apiserver, version: v1.28.2
	2023/09/18 19:16:26 Generating JWE encryption key
	2023/09/18 19:16:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/09/18 19:16:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/09/18 19:16:26 Initializing JWE encryption key from synchronized object
	2023/09/18 19:16:26 Creating in-cluster Sidecar client
	2023/09/18 19:16:26 Successful request to sidecar
	2023/09/18 19:16:26 Serving insecurely on HTTP port: 9090
	2023/09/18 19:16:26 Starting overwatch
	
	* 
	* ==> storage-provisioner [5388c197829f] <==
	* I0918 19:14:36.205039       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 19:14:36.209325       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 19:14:36.209341       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 19:14:36.212653       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 19:14:36.212760       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-847000_9f2ffb87-fca2-4a54-9f0e-bbedd64a022c!
	I0918 19:14:36.212982       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48555537-faa4-450e-9382-be08f24094ee", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-847000_9f2ffb87-fca2-4a54-9f0e-bbedd64a022c became leader
	I0918 19:14:36.312990       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-847000_9f2ffb87-fca2-4a54-9f0e-bbedd64a022c!
	
	* 
	* ==> storage-provisioner [cdd0803f0b0c] <==
	* I0918 19:15:12.950236       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 19:15:12.956497       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 19:15:12.956586       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 19:15:30.342345       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 19:15:30.342438       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-847000_132c0f0e-bd4c-49ea-aaa6-3323a3935e5b!
	I0918 19:15:30.342849       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48555537-faa4-450e-9382-be08f24094ee", APIVersion:"v1", ResourceVersion:"558", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-847000_132c0f0e-bd4c-49ea-aaa6-3323a3935e5b became leader
	I0918 19:15:30.443295       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-847000_132c0f0e-bd4c-49ea-aaa6-3323a3935e5b!
	I0918 19:15:44.099278       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0918 19:15:44.099358       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    d032ad6f-165d-4083-8a88-fed37e50afe1 346 0 2023-09-18 19:14:10 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-09-18 19:14:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-928b0ae6-6aab-4910-ba6d-2f276dad7fcb &PersistentVolumeClaim{ObjectMeta:{myclaim  default  928b0ae6-6aab-4910-ba6d-2f276dad7fcb 623 0 2023-09-18 19:15:44 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-09-18 19:15:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-09-18 19:15:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0918 19:15:44.100007       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-928b0ae6-6aab-4910-ba6d-2f276dad7fcb" provisioned
	I0918 19:15:44.100228       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0918 19:15:44.100264       1 volume_store.go:212] Trying to save persistentvolume "pvc-928b0ae6-6aab-4910-ba6d-2f276dad7fcb"
	I0918 19:15:44.100898       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"928b0ae6-6aab-4910-ba6d-2f276dad7fcb", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0918 19:15:44.106613       1 volume_store.go:219] persistentvolume "pvc-928b0ae6-6aab-4910-ba6d-2f276dad7fcb" saved
	I0918 19:15:44.106780       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"928b0ae6-6aab-4910-ba6d-2f276dad7fcb", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-928b0ae6-6aab-4910-ba6d-2f276dad7fcb
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-847000 -n functional-847000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-847000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-847000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-847000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-847000/192.168.105.4
	Start Time:       Mon, 18 Sep 2023 12:16:13 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://6b5be5d04c6864dfb8d865d0ab0170f5c83b9d1f8f2e563210b87bf66219f30e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 18 Sep 2023 12:16:14 -0700
	      Finished:     Mon, 18 Sep 2023 12:16:14 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zhdcw (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zhdcw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  18s   default-scheduler  Successfully assigned default/busybox-mount to functional-847000
	  Normal  Pulling    18s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     17s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.052s (1.052s including waiting)
	  Normal  Created    17s   kubelet            Created container mount-munger
	  Normal  Started    17s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (42.71s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.08s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-438000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-438000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 3d847d91d804
	Removing intermediate container 3d847d91d804
	 ---> cec5f152cdf4
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in c8ca0e1b6638
	Removing intermediate container c8ca0e1b6638
	 ---> 1a3849f84aaa
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in a6a0272fed0e
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
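The `exec format error` in the build output above is the classic symptom of the warning printed at each step: the base image `gcr.io/google-containers/alpine-with-bash:1.0` only provides linux/amd64, so its `/bin/sh` cannot execute on the linux/arm64 Docker host inside the VM. As a minimal sketch (not what the test harness does — the `--platform` handling and the commented-out `docker build` invocation are illustrative assumptions), one way to surface the mismatch is to derive an explicit target platform from the host architecture instead of relying on the default:

```shell
#!/bin/sh
# Sketch: map the host machine type to an explicit Docker --platform value,
# so a base image that lacks a matching variant fails at pull time with a
# clear error rather than at RUN time with "exec format error".
arch="$(uname -m)"
case "$arch" in
  arm64|aarch64) platform="linux/arm64" ;;
  x86_64|amd64)  platform="linux/amd64" ;;
  *)             platform="linux/$arch" ;;
esac
echo "host arch: $arch -> requesting $platform"
# Hypothetical invocation mirroring the failing test command:
# docker build --platform "$platform" -t aaa:latest \
#   --build-arg ENV_A=test_env_str --no-cache ./testdata/image-build/test-arg
```

On this arm64 host the sketch would request `linux/arm64`, and the amd64-only base image would then be rejected up front; the alternative is to keep the amd64 image and rely on binfmt/QEMU user-mode emulation in the guest, which boot2docker does not ship.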
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-438000 -n image-438000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-438000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-847000                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-847000                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-847000                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| image          | functional-847000 image ls                               | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	| image          | functional-847000 image load --daemon                    | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-847000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-847000 image ls                               | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	| image          | functional-847000 image load --daemon                    | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-847000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-847000 image ls                               | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	| image          | functional-847000 image save                             | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-847000 |                   |         |         |                     |                     |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-847000 image rm                               | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-847000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-847000 image ls                               | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	| image          | functional-847000 image load                             | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-847000 image ls                               | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	| image          | functional-847000 image save --daemon                    | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-847000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-847000                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-847000                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-847000 ssh pgrep                              | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|                | buildkitd                                                |                   |         |         |                     |                     |
	| image          | functional-847000                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|                | image ls --format json                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-847000 image build -t                         | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|                | localhost/my-image:functional-847000                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                   |         |         |                     |                     |
	| image          | functional-847000                                        | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|                | image ls --format table                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-847000 image ls                               | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	| delete         | -p functional-847000                                     | functional-847000 | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	| start          | -p image-438000 --driver=qemu2                           | image-438000      | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:17 PDT |
	|                |                                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-438000      | jenkins | v1.31.2 | 18 Sep 23 12:17 PDT | 18 Sep 23 12:17 PDT |
	|                | ./testdata/image-build/test-normal                       |                   |         |         |                     |                     |
	|                | -p image-438000                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-438000      | jenkins | v1.31.2 | 18 Sep 23 12:17 PDT | 18 Sep 23 12:17 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                 |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                       |                   |         |         |                     |                     |
	|                | image-438000                                             |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 12:16:42
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 12:16:42.389494    2777 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:16:42.389618    2777 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:16:42.389619    2777 out.go:309] Setting ErrFile to fd 2...
	I0918 12:16:42.389621    2777 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:16:42.389736    2777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:16:42.390782    2777 out.go:303] Setting JSON to false
	I0918 12:16:42.406947    2777 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2776,"bootTime":1695061826,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:16:42.407001    2777 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:16:42.411291    2777 out.go:177] * [image-438000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:16:42.418288    2777 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:16:42.418339    2777 notify.go:220] Checking for updates...
	I0918 12:16:42.422314    2777 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:16:42.423272    2777 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:16:42.426241    2777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:16:42.429253    2777 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:16:42.432281    2777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:16:42.435494    2777 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:16:42.439258    2777 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:16:42.446186    2777 start.go:298] selected driver: qemu2
	I0918 12:16:42.446190    2777 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:16:42.446195    2777 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:16:42.446246    2777 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:16:42.449233    2777 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:16:42.454519    2777 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0918 12:16:42.454618    2777 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 12:16:42.454634    2777 cni.go:84] Creating CNI manager for ""
	I0918 12:16:42.454646    2777 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:16:42.454654    2777 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:16:42.454659    2777 start_flags.go:321] config:
	{Name:image-438000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:image-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:16:42.459047    2777 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:16:42.463292    2777 out.go:177] * Starting control plane node image-438000 in cluster image-438000
	I0918 12:16:42.471237    2777 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:16:42.471263    2777 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:16:42.471271    2777 cache.go:57] Caching tarball of preloaded images
	I0918 12:16:42.471324    2777 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:16:42.471328    2777 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:16:42.471512    2777 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/config.json ...
	I0918 12:16:42.471523    2777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/config.json: {Name:mk97d7d776b3a89c6280fcfd709e3983c8c42e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:16:42.471697    2777 start.go:365] acquiring machines lock for image-438000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:16:42.471725    2777 start.go:369] acquired machines lock for "image-438000" in 24.917µs
	I0918 12:16:42.471734    2777 start.go:93] Provisioning new machine with config: &{Name:image-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:image-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:16:42.471758    2777 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:16:42.490219    2777 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0918 12:16:42.512727    2777 start.go:159] libmachine.API.Create for "image-438000" (driver="qemu2")
	I0918 12:16:42.512747    2777 client.go:168] LocalClient.Create starting
	I0918 12:16:42.512813    2777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:16:42.512836    2777 main.go:141] libmachine: Decoding PEM data...
	I0918 12:16:42.512844    2777 main.go:141] libmachine: Parsing certificate...
	I0918 12:16:42.512878    2777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:16:42.512894    2777 main.go:141] libmachine: Decoding PEM data...
	I0918 12:16:42.512900    2777 main.go:141] libmachine: Parsing certificate...
	I0918 12:16:42.513191    2777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:16:42.654104    2777 main.go:141] libmachine: Creating SSH key...
	I0918 12:16:42.725079    2777 main.go:141] libmachine: Creating Disk image...
	I0918 12:16:42.725083    2777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:16:42.725223    2777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/disk.qcow2
	I0918 12:16:42.746898    2777 main.go:141] libmachine: STDOUT: 
	I0918 12:16:42.746909    2777 main.go:141] libmachine: STDERR: 
	I0918 12:16:42.746963    2777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/disk.qcow2 +20000M
	I0918 12:16:42.754107    2777 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:16:42.754126    2777 main.go:141] libmachine: STDERR: 
	I0918 12:16:42.754144    2777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/disk.qcow2
	I0918 12:16:42.754151    2777 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:16:42.754184    2777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:c6:2b:22:62:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/disk.qcow2
	I0918 12:16:42.806234    2777 main.go:141] libmachine: STDOUT: 
	I0918 12:16:42.806252    2777 main.go:141] libmachine: STDERR: 
	I0918 12:16:42.806256    2777 main.go:141] libmachine: Attempt 0
	I0918 12:16:42.806270    2777 main.go:141] libmachine: Searching for 86:c6:2b:22:62:a1 in /var/db/dhcpd_leases ...
	I0918 12:16:42.806336    2777 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0918 12:16:42.806358    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:e4:5a:e6:f2:f3 ID:1,72:e4:5a:e6:f2:f3 Lease:0x6509f2e3}
	I0918 12:16:42.806362    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:5a:5:1b:96:65 ID:1,7a:5a:5:1b:96:65 Lease:0x6508a157}
	I0918 12:16:42.806366    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6508a134}
	I0918 12:16:44.808559    2777 main.go:141] libmachine: Attempt 1
	I0918 12:16:44.808608    2777 main.go:141] libmachine: Searching for 86:c6:2b:22:62:a1 in /var/db/dhcpd_leases ...
	I0918 12:16:44.808892    2777 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0918 12:16:44.808935    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:e4:5a:e6:f2:f3 ID:1,72:e4:5a:e6:f2:f3 Lease:0x6509f2e3}
	I0918 12:16:44.808963    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:5a:5:1b:96:65 ID:1,7a:5a:5:1b:96:65 Lease:0x6508a157}
	I0918 12:16:44.809020    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6508a134}
	I0918 12:16:46.811166    2777 main.go:141] libmachine: Attempt 2
	I0918 12:16:46.811181    2777 main.go:141] libmachine: Searching for 86:c6:2b:22:62:a1 in /var/db/dhcpd_leases ...
	I0918 12:16:46.811381    2777 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0918 12:16:46.811419    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:e4:5a:e6:f2:f3 ID:1,72:e4:5a:e6:f2:f3 Lease:0x6509f2e3}
	I0918 12:16:46.811429    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:5a:5:1b:96:65 ID:1,7a:5a:5:1b:96:65 Lease:0x6508a157}
	I0918 12:16:46.811439    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6508a134}
	I0918 12:16:48.813482    2777 main.go:141] libmachine: Attempt 3
	I0918 12:16:48.813486    2777 main.go:141] libmachine: Searching for 86:c6:2b:22:62:a1 in /var/db/dhcpd_leases ...
	I0918 12:16:48.813533    2777 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0918 12:16:48.813539    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:e4:5a:e6:f2:f3 ID:1,72:e4:5a:e6:f2:f3 Lease:0x6509f2e3}
	I0918 12:16:48.813543    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:5a:5:1b:96:65 ID:1,7a:5a:5:1b:96:65 Lease:0x6508a157}
	I0918 12:16:48.813547    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6508a134}
	I0918 12:16:50.815553    2777 main.go:141] libmachine: Attempt 4
	I0918 12:16:50.815556    2777 main.go:141] libmachine: Searching for 86:c6:2b:22:62:a1 in /var/db/dhcpd_leases ...
	I0918 12:16:50.815582    2777 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0918 12:16:50.815587    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:e4:5a:e6:f2:f3 ID:1,72:e4:5a:e6:f2:f3 Lease:0x6509f2e3}
	I0918 12:16:50.815591    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:5a:5:1b:96:65 ID:1,7a:5a:5:1b:96:65 Lease:0x6508a157}
	I0918 12:16:50.815595    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6508a134}
	I0918 12:16:52.816038    2777 main.go:141] libmachine: Attempt 5
	I0918 12:16:52.816050    2777 main.go:141] libmachine: Searching for 86:c6:2b:22:62:a1 in /var/db/dhcpd_leases ...
	I0918 12:16:52.816137    2777 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0918 12:16:52.816145    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:e4:5a:e6:f2:f3 ID:1,72:e4:5a:e6:f2:f3 Lease:0x6509f2e3}
	I0918 12:16:52.816149    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:5a:5:1b:96:65 ID:1,7a:5a:5:1b:96:65 Lease:0x6508a157}
	I0918 12:16:52.816153    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6508a134}
	I0918 12:16:54.818234    2777 main.go:141] libmachine: Attempt 6
	I0918 12:16:54.818250    2777 main.go:141] libmachine: Searching for 86:c6:2b:22:62:a1 in /var/db/dhcpd_leases ...
	I0918 12:16:54.818343    2777 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0918 12:16:54.818356    2777 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:86:c6:2b:22:62:a1 ID:1,86:c6:2b:22:62:a1 Lease:0x6509f3a5}
	I0918 12:16:54.818360    2777 main.go:141] libmachine: Found match: 86:c6:2b:22:62:a1
	I0918 12:16:54.818369    2777 main.go:141] libmachine: IP: 192.168.105.5
	I0918 12:16:54.818374    2777 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0918 12:16:55.824111    2777 machine.go:88] provisioning docker machine ...
	I0918 12:16:55.824124    2777 buildroot.go:166] provisioning hostname "image-438000"
	I0918 12:16:55.824164    2777 main.go:141] libmachine: Using SSH client type: native
	I0918 12:16:55.824419    2777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100820760] 0x100822ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0918 12:16:55.824423    2777 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-438000 && echo "image-438000" | sudo tee /etc/hostname
	I0918 12:16:55.904568    2777 main.go:141] libmachine: SSH cmd err, output: <nil>: image-438000
	
	I0918 12:16:55.904621    2777 main.go:141] libmachine: Using SSH client type: native
	I0918 12:16:55.904868    2777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100820760] 0x100822ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0918 12:16:55.904875    2777 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-438000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-438000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-438000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 12:16:55.979378    2777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 12:16:55.979386    2777 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17263-1251/.minikube CaCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17263-1251/.minikube}
	I0918 12:16:55.979392    2777 buildroot.go:174] setting up certificates
	I0918 12:16:55.979398    2777 provision.go:83] configureAuth start
	I0918 12:16:55.979401    2777 provision.go:138] copyHostCerts
	I0918 12:16:55.979476    2777 exec_runner.go:144] found /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.pem, removing ...
	I0918 12:16:55.979480    2777 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.pem
	I0918 12:16:55.979597    2777 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.pem (1082 bytes)
	I0918 12:16:55.979799    2777 exec_runner.go:144] found /Users/jenkins/minikube-integration/17263-1251/.minikube/cert.pem, removing ...
	I0918 12:16:55.979801    2777 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17263-1251/.minikube/cert.pem
	I0918 12:16:55.979847    2777 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/cert.pem (1123 bytes)
	I0918 12:16:55.979945    2777 exec_runner.go:144] found /Users/jenkins/minikube-integration/17263-1251/.minikube/key.pem, removing ...
	I0918 12:16:55.979947    2777 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17263-1251/.minikube/key.pem
	I0918 12:16:55.979994    2777 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/key.pem (1679 bytes)
	I0918 12:16:55.980088    2777 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem org=jenkins.image-438000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-438000]
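The server-cert generation above (CA-signed, with the listed SANs) can be approximated with plain `openssl`. Everything below is an illustrative stand-in for minikube's Go implementation, not its actual code path: the file names, the 1-day validity, and the temp directory are arbitrary, and OpenSSL 1.1.1+ is assumed for the `-ext` flag.

```shell
# Create a throwaway CA, then a server cert signed by it carrying the same
# SAN list the log shows. Nothing here touches the real .minikube tree.
DIR=$(mktemp -d)
cd "$DIR"

# 1. self-signed CA (stands in for ca.pem / ca-key.pem)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
    -subj "/CN=minikubeCA" -days 1 2>/dev/null

# 2. server key + CSR
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
    -subj "/CN=image-438000" 2>/dev/null

# 3. sign the CSR, injecting the SANs from the provision.go line above
printf 'subjectAltName=IP:192.168.105.5,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:image-438000\n' > san.ext
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -out server.pem -days 1 -extfile san.ext 2>/dev/null

openssl x509 -in server.pem -noout -ext subjectAltName
```

The printed subjectAltName block should include every IP and DNS name the API server will be reached under, which is exactly what the SAN list in the log encodes.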
	I0918 12:16:56.071897    2777 provision.go:172] copyRemoteCerts
	I0918 12:16:56.071921    2777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 12:16:56.071926    2777 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/id_rsa Username:docker}
	I0918 12:16:56.111841    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 12:16:56.119004    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0918 12:16:56.125794    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 12:16:56.132420    2777 provision.go:86] duration metric: configureAuth took 153.015916ms
	I0918 12:16:56.132425    2777 buildroot.go:189] setting minikube options for container-runtime
	I0918 12:16:56.132525    2777 config.go:182] Loaded profile config "image-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:16:56.132563    2777 main.go:141] libmachine: Using SSH client type: native
	I0918 12:16:56.132771    2777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100820760] 0x100822ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0918 12:16:56.132774    2777 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 12:16:56.207807    2777 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0918 12:16:56.207812    2777 buildroot.go:70] root file system type: tmpfs
	I0918 12:16:56.207866    2777 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 12:16:56.207931    2777 main.go:141] libmachine: Using SSH client type: native
	I0918 12:16:56.208198    2777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100820760] 0x100822ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0918 12:16:56.208233    2777 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 12:16:56.289157    2777 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 12:16:56.289205    2777 main.go:141] libmachine: Using SSH client type: native
	I0918 12:16:56.289486    2777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100820760] 0x100822ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0918 12:16:56.289495    2777 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0918 12:16:56.617108    2777 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0918 12:16:56.617117    2777 machine.go:91] provisioned docker machine in 793.007583ms
	I0918 12:16:56.617121    2777 client.go:171] LocalClient.Create took 14.10451825s
	I0918 12:16:56.617133    2777 start.go:167] duration metric: libmachine.API.Create for "image-438000" took 14.104565083s
	I0918 12:16:56.617140    2777 start.go:300] post-start starting for "image-438000" (driver="qemu2")
	I0918 12:16:56.617144    2777 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 12:16:56.617229    2777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 12:16:56.617237    2777 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/id_rsa Username:docker}
	I0918 12:16:56.655936    2777 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 12:16:56.657321    2777 info.go:137] Remote host: Buildroot 2021.02.12
	I0918 12:16:56.657326    2777 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17263-1251/.minikube/addons for local assets ...
	I0918 12:16:56.657391    2777 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17263-1251/.minikube/files for local assets ...
	I0918 12:16:56.657488    2777 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17263-1251/.minikube/files/etc/ssl/certs/16682.pem -> 16682.pem in /etc/ssl/certs
	I0918 12:16:56.657597    2777 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 12:16:56.660322    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/files/etc/ssl/certs/16682.pem --> /etc/ssl/certs/16682.pem (1708 bytes)
	I0918 12:16:56.667753    2777 start.go:303] post-start completed in 50.602666ms
	I0918 12:16:56.668106    2777 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/config.json ...
	I0918 12:16:56.668260    2777 start.go:128] duration metric: createHost completed in 14.196646625s
	I0918 12:16:56.668294    2777 main.go:141] libmachine: Using SSH client type: native
	I0918 12:16:56.668512    2777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100820760] 0x100822ed0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0918 12:16:56.668515    2777 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 12:16:56.742155    2777 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695064616.543014793
	
	I0918 12:16:56.742159    2777 fix.go:206] guest clock: 1695064616.543014793
	I0918 12:16:56.742163    2777 fix.go:219] Guest: 2023-09-18 12:16:56.543014793 -0700 PDT Remote: 2023-09-18 12:16:56.668261 -0700 PDT m=+14.298587835 (delta=-125.246207ms)
	I0918 12:16:56.742172    2777 fix.go:190] guest clock delta is within tolerance: -125.246207ms
	I0918 12:16:56.742174    2777 start.go:83] releasing machines lock for "image-438000", held for 14.270593958s
	I0918 12:16:56.742462    2777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 12:16:56.742462    2777 ssh_runner.go:195] Run: cat /version.json
	I0918 12:16:56.742468    2777 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/id_rsa Username:docker}
	I0918 12:16:56.742479    2777 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/id_rsa Username:docker}
	I0918 12:16:56.826265    2777 ssh_runner.go:195] Run: systemctl --version
	I0918 12:16:56.828523    2777 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 12:16:56.830780    2777 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 12:16:56.830812    2777 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 12:16:56.836363    2777 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 12:16:56.836368    2777 start.go:469] detecting cgroup driver to use...
	I0918 12:16:56.836435    2777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 12:16:56.842590    2777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0918 12:16:56.845979    2777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 12:16:56.849182    2777 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 12:16:56.849200    2777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 12:16:56.852307    2777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 12:16:56.855600    2777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 12:16:56.859074    2777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 12:16:56.862191    2777 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 12:16:56.865003    2777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 12:16:56.868278    2777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 12:16:56.871576    2777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 12:16:56.874741    2777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:16:56.934308    2777 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0918 12:16:56.942745    2777 start.go:469] detecting cgroup driver to use...
	I0918 12:16:56.942802    2777 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 12:16:56.950513    2777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 12:16:56.955181    2777 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 12:16:56.962774    2777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 12:16:56.967366    2777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 12:16:56.972011    2777 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0918 12:16:57.008219    2777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 12:16:57.013543    2777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 12:16:57.018876    2777 ssh_runner.go:195] Run: which cri-dockerd
	I0918 12:16:57.020097    2777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 12:16:57.022786    2777 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0918 12:16:57.028033    2777 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 12:16:57.094029    2777 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 12:16:57.154562    2777 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 12:16:57.154579    2777 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0918 12:16:57.159814    2777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:16:57.216793    2777 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 12:16:58.383612    2777 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.166818792s)
	I0918 12:16:58.383673    2777 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 12:16:58.444379    2777 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0918 12:16:58.506169    2777 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 12:16:58.566311    2777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:16:58.627692    2777 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0918 12:16:58.635469    2777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:16:58.699681    2777 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0918 12:16:58.723197    2777 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0918 12:16:58.723271    2777 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0918 12:16:58.725503    2777 start.go:537] Will wait 60s for crictl version
	I0918 12:16:58.725540    2777 ssh_runner.go:195] Run: which crictl
	I0918 12:16:58.726909    2777 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 12:16:58.742110    2777 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0918 12:16:58.742173    2777 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 12:16:58.751883    2777 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 12:16:58.763129    2777 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0918 12:16:58.763202    2777 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0918 12:16:58.764554    2777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
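The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` pattern above refreshes a single /etc/hosts entry by filtering out any stale copy and appending the current one. A sketch against a temp file (no sudo; tab handling simplified to a plain trailing-anchor match):

```shell
HOSTS=$(mktemp)
TMP=$(mktemp)
printf '127.0.0.1 localhost\n192.168.105.1\thost.minikube.internal\n' > "$HOSTS"

# drop any existing entry, append the fresh one, then copy the result back
{ grep -v 'host.minikube.internal$' "$HOSTS"
  printf '192.168.105.1\thost.minikube.internal\n'
} > "$TMP"
cp "$TMP" "$HOSTS"

count=$(grep -c 'host.minikube.internal' "$HOSTS")
echo "entries: $count"
```

The rewrite guarantees exactly one entry regardless of whether (or how many times) the name was present before, which is why the log runs it even though the preceding `grep` already probed for the entry.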
	I0918 12:16:58.768260    2777 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:16:58.768301    2777 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 12:16:58.773415    2777 docker.go:636] Got preloaded images: 
	I0918 12:16:58.773419    2777 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0918 12:16:58.773453    2777 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 12:16:58.776471    2777 ssh_runner.go:195] Run: which lz4
	I0918 12:16:58.778035    2777 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 12:16:58.779316    2777 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 12:16:58.779325    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356993689 bytes)
	I0918 12:17:00.108721    2777 docker.go:600] Took 1.330732 seconds to copy over tarball
	I0918 12:17:00.108770    2777 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 12:17:01.122008    2777 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.013224833s)
	I0918 12:17:01.122023    2777 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 12:17:01.137485    2777 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 12:17:01.140664    2777 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0918 12:17:01.145934    2777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:17:01.211417    2777 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 12:17:02.755354    2777 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.543939166s)
	I0918 12:17:02.755437    2777 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 12:17:02.761716    2777 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 12:17:02.761725    2777 cache_images.go:84] Images are preloaded, skipping loading
	I0918 12:17:02.761800    2777 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 12:17:02.772524    2777 cni.go:84] Creating CNI manager for ""
	I0918 12:17:02.772534    2777 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:17:02.772541    2777 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0918 12:17:02.772549    2777 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-438000 NodeName:image-438000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 12:17:02.772619    2777 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-438000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 12:17:02.772656    2777 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-438000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:image-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0918 12:17:02.772709    2777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0918 12:17:02.776155    2777 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 12:17:02.776179    2777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 12:17:02.779405    2777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0918 12:17:02.784646    2777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 12:17:02.789904    2777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0918 12:17:02.794763    2777 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0918 12:17:02.796052    2777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 12:17:02.800084    2777 certs.go:56] Setting up /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000 for IP: 192.168.105.5
	I0918 12:17:02.800091    2777 certs.go:190] acquiring lock for shared ca certs: {Name:mkfac81ee65979b8c4f5ece6243c3a0190531689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:02.800224    2777 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key
	I0918 12:17:02.800268    2777 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key
	I0918 12:17:02.800291    2777 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/client.key
	I0918 12:17:02.800298    2777 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/client.crt with IP's: []
	I0918 12:17:02.903595    2777 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/client.crt ...
	I0918 12:17:02.903601    2777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/client.crt: {Name:mk142f03069fc218854f90a3783cbf80eaa79196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:02.903857    2777 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/client.key ...
	I0918 12:17:02.903859    2777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/client.key: {Name:mk03e2cd3e41a4b62e7d8da6e4b98e1208fb3701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:02.903962    2777 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/apiserver.key.e69b33ca
	I0918 12:17:02.903970    2777 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0918 12:17:02.940725    2777 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/apiserver.crt.e69b33ca ...
	I0918 12:17:02.940727    2777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/apiserver.crt.e69b33ca: {Name:mk9ca6a9c22b344015776609d2e05ea1b0cf4938 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:02.940873    2777 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/apiserver.key.e69b33ca ...
	I0918 12:17:02.940876    2777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/apiserver.key.e69b33ca: {Name:mkdf5dd48cc36d16afa3dfc0dc716a932a01c5ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:02.940981    2777 certs.go:337] copying /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/apiserver.crt
	I0918 12:17:02.941067    2777 certs.go:341] copying /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/apiserver.key
	I0918 12:17:02.941148    2777 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/proxy-client.key
	I0918 12:17:02.941155    2777 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/proxy-client.crt with IP's: []
	I0918 12:17:03.059910    2777 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/proxy-client.crt ...
	I0918 12:17:03.059915    2777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/proxy-client.crt: {Name:mkfa338318b5b18faada0e424b1758376f761ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:03.060135    2777 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/proxy-client.key ...
	I0918 12:17:03.060138    2777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/proxy-client.key: {Name:mkbbfc94decd1624e5f3c29601dd6e201020225a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:03.060372    2777 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/1668.pem (1338 bytes)
	W0918 12:17:03.060399    2777 certs.go:433] ignoring /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/1668_empty.pem, impossibly tiny 0 bytes
	I0918 12:17:03.060404    2777 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 12:17:03.060422    2777 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem (1082 bytes)
	I0918 12:17:03.060441    2777 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem (1123 bytes)
	I0918 12:17:03.060457    2777 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem (1679 bytes)
	I0918 12:17:03.060492    2777 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/files/etc/ssl/certs/16682.pem (1708 bytes)
	I0918 12:17:03.060784    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0918 12:17:03.068420    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 12:17:03.075877    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 12:17:03.083111    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/image-438000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 12:17:03.090222    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 12:17:03.096943    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 12:17:03.104062    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 12:17:03.111655    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0918 12:17:03.119147    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 12:17:03.126418    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/1668.pem --> /usr/share/ca-certificates/1668.pem (1338 bytes)
	I0918 12:17:03.133107    2777 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/files/etc/ssl/certs/16682.pem --> /usr/share/ca-certificates/16682.pem (1708 bytes)
	I0918 12:17:03.139969    2777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 12:17:03.145126    2777 ssh_runner.go:195] Run: openssl version
	I0918 12:17:03.147130    2777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 12:17:03.150079    2777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 12:17:03.151576    2777 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 18 18:52 /usr/share/ca-certificates/minikubeCA.pem
	I0918 12:17:03.151596    2777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 12:17:03.153593    2777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 12:17:03.156557    2777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1668.pem && ln -fs /usr/share/ca-certificates/1668.pem /etc/ssl/certs/1668.pem"
	I0918 12:17:03.159880    2777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1668.pem
	I0918 12:17:03.161417    2777 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:13 /usr/share/ca-certificates/1668.pem
	I0918 12:17:03.161436    2777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1668.pem
	I0918 12:17:03.163317    2777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1668.pem /etc/ssl/certs/51391683.0"
	I0918 12:17:03.166404    2777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16682.pem && ln -fs /usr/share/ca-certificates/16682.pem /etc/ssl/certs/16682.pem"
	I0918 12:17:03.169339    2777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16682.pem
	I0918 12:17:03.170822    2777 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:13 /usr/share/ca-certificates/16682.pem
	I0918 12:17:03.170841    2777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16682.pem
	I0918 12:17:03.172679    2777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16682.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 12:17:03.176035    2777 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0918 12:17:03.177449    2777 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0918 12:17:03.177475    2777 kubeadm.go:404] StartCluster: {Name:image-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:image-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:17:03.177534    2777 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 12:17:03.182896    2777 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 12:17:03.186005    2777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 12:17:03.188694    2777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 12:17:03.192002    2777 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 12:17:03.192013    2777 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 12:17:03.215594    2777 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0918 12:17:03.215615    2777 kubeadm.go:322] [preflight] Running pre-flight checks
	I0918 12:17:03.274022    2777 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 12:17:03.274080    2777 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 12:17:03.274123    2777 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 12:17:03.333691    2777 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 12:17:03.342894    2777 out.go:204]   - Generating certificates and keys ...
	I0918 12:17:03.342937    2777 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0918 12:17:03.342968    2777 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0918 12:17:03.414666    2777 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 12:17:03.548242    2777 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0918 12:17:03.685234    2777 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0918 12:17:03.835433    2777 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0918 12:17:03.885433    2777 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0918 12:17:03.885490    2777 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-438000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0918 12:17:03.941032    2777 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0918 12:17:03.941102    2777 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-438000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0918 12:17:04.068600    2777 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 12:17:04.107092    2777 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 12:17:04.235762    2777 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0918 12:17:04.235795    2777 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 12:17:04.326236    2777 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 12:17:04.415832    2777 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 12:17:04.503126    2777 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 12:17:04.679114    2777 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 12:17:04.679333    2777 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 12:17:04.681195    2777 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 12:17:04.689409    2777 out.go:204]   - Booting up control plane ...
	I0918 12:17:04.689532    2777 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 12:17:04.689580    2777 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 12:17:04.689606    2777 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 12:17:04.689658    2777 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 12:17:04.689694    2777 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 12:17:04.689711    2777 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0918 12:17:04.763428    2777 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 12:17:08.266972    2777 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.503804 seconds
	I0918 12:17:08.267039    2777 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 12:17:08.272091    2777 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 12:17:08.783138    2777 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 12:17:08.783290    2777 kubeadm.go:322] [mark-control-plane] Marking the node image-438000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 12:17:09.288325    2777 kubeadm.go:322] [bootstrap-token] Using token: d56bv3.xfz54degysdiooht
	I0918 12:17:09.294708    2777 out.go:204]   - Configuring RBAC rules ...
	I0918 12:17:09.294768    2777 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 12:17:09.296238    2777 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 12:17:09.303786    2777 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 12:17:09.304910    2777 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 12:17:09.306108    2777 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 12:17:09.307244    2777 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 12:17:09.311601    2777 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 12:17:09.470842    2777 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0918 12:17:09.698860    2777 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0918 12:17:09.699255    2777 kubeadm.go:322] 
	I0918 12:17:09.699279    2777 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0918 12:17:09.699281    2777 kubeadm.go:322] 
	I0918 12:17:09.699316    2777 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0918 12:17:09.699317    2777 kubeadm.go:322] 
	I0918 12:17:09.699332    2777 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0918 12:17:09.699366    2777 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 12:17:09.699391    2777 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 12:17:09.699393    2777 kubeadm.go:322] 
	I0918 12:17:09.699417    2777 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0918 12:17:09.699418    2777 kubeadm.go:322] 
	I0918 12:17:09.699438    2777 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 12:17:09.699439    2777 kubeadm.go:322] 
	I0918 12:17:09.699463    2777 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0918 12:17:09.699499    2777 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 12:17:09.699529    2777 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 12:17:09.699531    2777 kubeadm.go:322] 
	I0918 12:17:09.699569    2777 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 12:17:09.699599    2777 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0918 12:17:09.699601    2777 kubeadm.go:322] 
	I0918 12:17:09.699640    2777 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token d56bv3.xfz54degysdiooht \
	I0918 12:17:09.699693    2777 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ba7ba5242719fff40b98ade8a053fc0c6dded3185e28e2742ca7444c7c25a7a5 \
	I0918 12:17:09.699703    2777 kubeadm.go:322] 	--control-plane 
	I0918 12:17:09.699704    2777 kubeadm.go:322] 
	I0918 12:17:09.699744    2777 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0918 12:17:09.699746    2777 kubeadm.go:322] 
	I0918 12:17:09.699788    2777 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token d56bv3.xfz54degysdiooht \
	I0918 12:17:09.699835    2777 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ba7ba5242719fff40b98ade8a053fc0c6dded3185e28e2742ca7444c7c25a7a5 
	I0918 12:17:09.699966    2777 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 12:17:09.699972    2777 cni.go:84] Creating CNI manager for ""
	I0918 12:17:09.699980    2777 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:17:09.705002    2777 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 12:17:09.708099    2777 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 12:17:09.711431    2777 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0918 12:17:09.716259    2777 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 12:17:09.716297    2777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36 minikube.k8s.io/name=image-438000 minikube.k8s.io/updated_at=2023_09_18T12_17_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:17:09.716298    2777 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:17:09.791675    2777 kubeadm.go:1081] duration metric: took 75.408583ms to wait for elevateKubeSystemPrivileges.
	I0918 12:17:09.791684    2777 ops.go:34] apiserver oom_adj: -16
	I0918 12:17:09.791688    2777 kubeadm.go:406] StartCluster complete in 6.614282375s
	I0918 12:17:09.791698    2777 settings.go:142] acquiring lock: {Name:mke420f28dda4f7a752738b3e6d571dc4216779e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:09.791777    2777 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:17:09.792145    2777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/kubeconfig: {Name:mk07020c5b974cf07ca0cda25f72a521eb245fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:09.792317    2777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 12:17:09.792362    2777 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0918 12:17:09.792396    2777 addons.go:69] Setting storage-provisioner=true in profile "image-438000"
	I0918 12:17:09.792400    2777 addons.go:69] Setting default-storageclass=true in profile "image-438000"
	I0918 12:17:09.792401    2777 addons.go:231] Setting addon storage-provisioner=true in "image-438000"
	I0918 12:17:09.792406    2777 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-438000"
	I0918 12:17:09.792420    2777 host.go:66] Checking if "image-438000" exists ...
	I0918 12:17:09.792576    2777 config.go:182] Loaded profile config "image-438000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:17:09.798018    2777 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 12:17:09.802047    2777 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 12:17:09.802052    2777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 12:17:09.802060    2777 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/id_rsa Username:docker}
	I0918 12:17:09.806025    2777 addons.go:231] Setting addon default-storageclass=true in "image-438000"
	I0918 12:17:09.806039    2777 host.go:66] Checking if "image-438000" exists ...
	I0918 12:17:09.806752    2777 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 12:17:09.806756    2777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 12:17:09.806761    2777 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/image-438000/id_rsa Username:docker}
	I0918 12:17:09.809693    2777 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-438000" context rescaled to 1 replicas
	I0918 12:17:09.809706    2777 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:17:09.816959    2777 out.go:177] * Verifying Kubernetes components...
	I0918 12:17:09.820034    2777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 12:17:09.847513    2777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 12:17:09.853866    2777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 12:17:09.855861    2777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 12:17:09.856038    2777 api_server.go:52] waiting for apiserver process to appear ...
	I0918 12:17:09.856062    2777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 12:17:10.358533    2777 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0918 12:17:10.358541    2777 api_server.go:72] duration metric: took 548.829959ms to wait for apiserver process to appear ...
	I0918 12:17:10.364332    2777 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0918 12:17:10.358545    2777 api_server.go:88] waiting for apiserver healthz status ...
	I0918 12:17:10.368359    2777 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0918 12:17:10.368381    2777 addons.go:502] enable addons completed in 576.038916ms: enabled=[storage-provisioner default-storageclass]
	I0918 12:17:10.371793    2777 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0918 12:17:10.372493    2777 api_server.go:141] control plane version: v1.28.2
	I0918 12:17:10.372496    2777 api_server.go:131] duration metric: took 4.145959ms to wait for apiserver health ...
	I0918 12:17:10.372499    2777 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 12:17:10.374999    2777 system_pods.go:59] 5 kube-system pods found
	I0918 12:17:10.375005    2777 system_pods.go:61] "etcd-image-438000" [abf4c201-4256-40e1-92aa-fcc7f7739cc8] Pending
	I0918 12:17:10.375007    2777 system_pods.go:61] "kube-apiserver-image-438000" [909da55d-9501-4bb0-ad21-694c92c9cb2e] Pending
	I0918 12:17:10.375009    2777 system_pods.go:61] "kube-controller-manager-image-438000" [dcad0e8f-797f-4cb1-971b-a710af56fcd7] Pending
	I0918 12:17:10.375011    2777 system_pods.go:61] "kube-scheduler-image-438000" [4ba6e54b-00d8-494e-afbf-72244630f478] Pending
	I0918 12:17:10.375012    2777 system_pods.go:61] "storage-provisioner" [5f3e6e9a-7709-4e2f-bc4a-954becbeb730] Pending
	I0918 12:17:10.375013    2777 system_pods.go:74] duration metric: took 2.512667ms to wait for pod list to return data ...
	I0918 12:17:10.375016    2777 kubeadm.go:581] duration metric: took 565.30725ms to wait for : map[apiserver:true system_pods:true] ...
	I0918 12:17:10.375020    2777 node_conditions.go:102] verifying NodePressure condition ...
	I0918 12:17:10.376371    2777 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0918 12:17:10.376376    2777 node_conditions.go:123] node cpu capacity is 2
	I0918 12:17:10.376381    2777 node_conditions.go:105] duration metric: took 1.358958ms to run NodePressure ...
	I0918 12:17:10.376385    2777 start.go:228] waiting for startup goroutines ...
	I0918 12:17:10.376387    2777 start.go:233] waiting for cluster config update ...
	I0918 12:17:10.376391    2777 start.go:242] writing updated cluster config ...
	I0918 12:17:10.376620    2777 ssh_runner.go:195] Run: rm -f paused
	I0918 12:17:10.404035    2777 start.go:600] kubectl: 1.27.2, cluster: 1.28.2 (minor skew: 1)
	I0918 12:17:10.408298    2777 out.go:177] * Done! kubectl is now configured to use "image-438000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-18 19:16:53 UTC, ends at Mon 2023-09-18 19:17:11 UTC. --
	Sep 18 19:17:05 image-438000 dockerd[1107]: time="2023-09-18T19:17:05.506294214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:17:05 image-438000 dockerd[1107]: time="2023-09-18T19:17:05.509459547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 19:17:05 image-438000 dockerd[1107]: time="2023-09-18T19:17:05.509587339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:17:05 image-438000 dockerd[1107]: time="2023-09-18T19:17:05.509621506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 19:17:05 image-438000 dockerd[1107]: time="2023-09-18T19:17:05.509643506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:17:05 image-438000 dockerd[1107]: time="2023-09-18T19:17:05.513056839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 19:17:05 image-438000 dockerd[1107]: time="2023-09-18T19:17:05.513097631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:17:05 image-438000 dockerd[1107]: time="2023-09-18T19:17:05.513109172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 19:17:05 image-438000 dockerd[1107]: time="2023-09-18T19:17:05.513115547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:17:05 image-438000 dockerd[1107]: time="2023-09-18T19:17:05.572777547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 19:17:05 image-438000 dockerd[1107]: time="2023-09-18T19:17:05.572833214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:17:05 image-438000 dockerd[1107]: time="2023-09-18T19:17:05.572850464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 19:17:05 image-438000 dockerd[1107]: time="2023-09-18T19:17:05.572862422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:17:11 image-438000 dockerd[1101]: time="2023-09-18T19:17:11.278169758Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 18 19:17:11 image-438000 dockerd[1101]: time="2023-09-18T19:17:11.406799175Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 18 19:17:11 image-438000 dockerd[1101]: time="2023-09-18T19:17:11.422048925Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 18 19:17:11 image-438000 dockerd[1107]: time="2023-09-18T19:17:11.462978842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 19:17:11 image-438000 dockerd[1107]: time="2023-09-18T19:17:11.463012925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:17:11 image-438000 dockerd[1107]: time="2023-09-18T19:17:11.463325800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 19:17:11 image-438000 dockerd[1107]: time="2023-09-18T19:17:11.463372467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:17:11 image-438000 dockerd[1101]: time="2023-09-18T19:17:11.597920050Z" level=info msg="ignoring event" container=a6a0272fed0e37de65910732f43e22c40a7630cf08867a9129a1237acf0816af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:17:11 image-438000 dockerd[1107]: time="2023-09-18T19:17:11.597978217Z" level=info msg="shim disconnected" id=a6a0272fed0e37de65910732f43e22c40a7630cf08867a9129a1237acf0816af namespace=moby
	Sep 18 19:17:11 image-438000 dockerd[1107]: time="2023-09-18T19:17:11.598004092Z" level=warning msg="cleaning up after shim disconnected" id=a6a0272fed0e37de65910732f43e22c40a7630cf08867a9129a1237acf0816af namespace=moby
	Sep 18 19:17:11 image-438000 dockerd[1107]: time="2023-09-18T19:17:11.598008009Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:17:11 image-438000 dockerd[1107]: time="2023-09-18T19:17:11.605344509Z" level=warning msg="cleanup warnings time=\"2023-09-18T19:17:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ab71429bd8195       9cdd6470f48c8       6 seconds ago       Running             etcd                      0                   900bb9d69314e
	4325f96e9eab0       64fc40cee3716       6 seconds ago       Running             kube-scheduler            0                   bbdd1cef0bf46
	996710bd53049       89d57b83c1786       6 seconds ago       Running             kube-controller-manager   0                   9f51e4e0f5ba6
	f498fa724b4d9       30bb499447fe1       6 seconds ago       Running             kube-apiserver            0                   785ef9b260df8
	
	* 
	* ==> describe nodes <==
	* Name:               image-438000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-438000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36
	                    minikube.k8s.io/name=image-438000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_18T12_17_09_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Sep 2023 19:17:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-438000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Sep 2023 19:17:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Sep 2023 19:17:09 +0000   Mon, 18 Sep 2023 19:17:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Sep 2023 19:17:09 +0000   Mon, 18 Sep 2023 19:17:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Sep 2023 19:17:09 +0000   Mon, 18 Sep 2023 19:17:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 18 Sep 2023 19:17:09 +0000   Mon, 18 Sep 2023 19:17:06 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-438000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 588ea76bfe9647f2a635df94e1a36725
	  System UUID:                588ea76bfe9647f2a635df94e1a36725
	  Boot ID:                    f44f265d-c40b-4b99-b9b7-6cafcb8d10ca
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-438000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-438000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-438000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-438000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 3s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node image-438000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node image-438000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node image-438000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Sep18 19:16] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.643461] EINJ: EINJ table not found.
	[  +0.527717] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043487] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000796] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.151019] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.059597] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.447077] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.160652] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[  +0.059300] systemd-fstab-generator[715]: Ignoring "noauto" for root device
	[  +0.060951] systemd-fstab-generator[728]: Ignoring "noauto" for root device
	[  +1.147898] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.076231] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +0.066091] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[  +0.059203] systemd-fstab-generator[938]: Ignoring "noauto" for root device
	[  +0.061356] systemd-fstab-generator[949]: Ignoring "noauto" for root device
	[  +0.070383] systemd-fstab-generator[985]: Ignoring "noauto" for root device
	[Sep18 19:17] systemd-fstab-generator[1094]: Ignoring "noauto" for root device
	[  +3.540816] systemd-fstab-generator[1430]: Ignoring "noauto" for root device
	[  +0.238814] kauditd_printk_skb: 68 callbacks suppressed
	[  +4.384565] systemd-fstab-generator[2310]: Ignoring "noauto" for root device
	[  +2.302002] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [ab71429bd819] <==
	* {"level":"info","ts":"2023-09-18T19:17:05.77162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-09-18T19:17:05.77177Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-09-18T19:17:05.772569Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-18T19:17:05.774852Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-18T19:17:05.775513Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-18T19:17:05.775785Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-18T19:17:05.775863Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-18T19:17:05.958644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-18T19:17:05.958755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-18T19:17:05.958792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-09-18T19:17:05.958819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-09-18T19:17:05.95885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-18T19:17:05.958873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-09-18T19:17:05.958896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-18T19:17:05.959593Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-438000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-18T19:17:05.959631Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-18T19:17:05.960094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-18T19:17:05.960166Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T19:17:05.960489Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-18T19:17:05.96083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-09-18T19:17:05.960966Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T19:17:05.96318Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T19:17:05.963204Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T19:17:05.963101Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-18T19:17:05.963219Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:17:12 up 0 min,  0 users,  load average: 0.65, 0.15, 0.05
	Linux image-438000 5.10.57 #1 SMP PREEMPT Fri Sep 15 19:03:18 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f498fa724b4d] <==
	* I0918 19:17:06.773175       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0918 19:17:06.773297       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0918 19:17:06.773305       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0918 19:17:06.773340       1 shared_informer.go:318] Caches are synced for configmaps
	I0918 19:17:06.774094       1 controller.go:624] quota admission added evaluator for: namespaces
	I0918 19:17:06.776839       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0918 19:17:06.788576       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 19:17:06.795037       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0918 19:17:06.795119       1 aggregator.go:166] initial CRD sync complete...
	I0918 19:17:06.795143       1 autoregister_controller.go:141] Starting autoregister controller
	I0918 19:17:06.795164       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0918 19:17:06.795180       1 cache.go:39] Caches are synced for autoregister controller
	I0918 19:17:07.677566       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0918 19:17:07.679025       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0918 19:17:07.679033       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0918 19:17:07.817857       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 19:17:07.828677       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0918 19:17:07.883049       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0918 19:17:07.885782       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0918 19:17:07.886178       1 controller.go:624] quota admission added evaluator for: endpoints
	I0918 19:17:07.887589       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 19:17:08.704841       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0918 19:17:09.267436       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0918 19:17:09.271301       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0918 19:17:09.274916       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [996710bd5304] <==
	* I0918 19:17:08.958480       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0918 19:17:08.958512       1 namespace_controller.go:197] "Starting namespace controller"
	I0918 19:17:08.958519       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0918 19:17:09.104562       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0918 19:17:09.104606       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0918 19:17:09.104610       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0918 19:17:09.403842       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0918 19:17:09.403875       1 horizontal.go:200] "Starting HPA controller"
	I0918 19:17:09.403883       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0918 19:17:09.554189       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0918 19:17:09.554217       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0918 19:17:09.554221       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0918 19:17:09.554225       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0918 19:17:09.704630       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0918 19:17:09.704666       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0918 19:17:09.704671       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0918 19:17:09.855142       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0918 19:17:09.855206       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0918 19:17:09.855214       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0918 19:17:10.104056       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0918 19:17:10.104758       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0918 19:17:10.104766       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0918 19:17:10.104827       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0918 19:17:10.154146       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0918 19:17:10.154184       1 cleaner.go:83] "Starting CSR cleaner controller"
	
	* 
	* ==> kube-scheduler [4325f96e9eab] <==
	* W0918 19:17:06.740001       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:17:06.740690       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0918 19:17:06.740166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:17:06.740731       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0918 19:17:06.740182       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 19:17:06.740791       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0918 19:17:06.740194       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 19:17:06.740947       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0918 19:17:06.740030       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 19:17:06.741006       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0918 19:17:06.739862       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0918 19:17:06.739987       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 19:17:06.741103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 19:17:06.741095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0918 19:17:07.551364       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 19:17:07.551381       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0918 19:17:07.576789       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 19:17:07.576806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0918 19:17:07.604508       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 19:17:07.604517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0918 19:17:07.696599       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:17:07.696615       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0918 19:17:07.716903       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 19:17:07.717013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0918 19:17:08.425313       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-18 19:16:53 UTC, ends at Mon 2023-09-18 19:17:12 UTC. --
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.356321    2329 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.426812    2329 kubelet_node_status.go:70] "Attempting to register node" node="image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.430247    2329 topology_manager.go:215] "Topology Admit Handler" podUID="23a1f2e5d3b4ea2a46557aaadb138e83" podNamespace="kube-system" podName="etcd-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.430306    2329 topology_manager.go:215] "Topology Admit Handler" podUID="2d403184dbd531fd9dade79127282944" podNamespace="kube-system" podName="kube-apiserver-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.430326    2329 topology_manager.go:215] "Topology Admit Handler" podUID="a09e2858012c56ffbeb865d4589112e9" podNamespace="kube-system" podName="kube-controller-manager-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.430338    2329 topology_manager.go:215] "Topology Admit Handler" podUID="3e7747dc3cfd98c20fdc8d35328f31d7" podNamespace="kube-system" podName="kube-scheduler-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.430790    2329 kubelet_node_status.go:108] "Node was previously registered" node="image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.430821    2329 kubelet_node_status.go:73] "Successfully registered node" node="image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.624995    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a09e2858012c56ffbeb865d4589112e9-k8s-certs\") pod \"kube-controller-manager-image-438000\" (UID: \"a09e2858012c56ffbeb865d4589112e9\") " pod="kube-system/kube-controller-manager-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.625015    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e7747dc3cfd98c20fdc8d35328f31d7-kubeconfig\") pod \"kube-scheduler-image-438000\" (UID: \"3e7747dc3cfd98c20fdc8d35328f31d7\") " pod="kube-system/kube-scheduler-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.625026    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/23a1f2e5d3b4ea2a46557aaadb138e83-etcd-data\") pod \"etcd-image-438000\" (UID: \"23a1f2e5d3b4ea2a46557aaadb138e83\") " pod="kube-system/etcd-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.625034    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d403184dbd531fd9dade79127282944-ca-certs\") pod \"kube-apiserver-image-438000\" (UID: \"2d403184dbd531fd9dade79127282944\") " pod="kube-system/kube-apiserver-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.625048    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d403184dbd531fd9dade79127282944-usr-share-ca-certificates\") pod \"kube-apiserver-image-438000\" (UID: \"2d403184dbd531fd9dade79127282944\") " pod="kube-system/kube-apiserver-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.625057    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a09e2858012c56ffbeb865d4589112e9-ca-certs\") pod \"kube-controller-manager-image-438000\" (UID: \"a09e2858012c56ffbeb865d4589112e9\") " pod="kube-system/kube-controller-manager-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.625066    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a09e2858012c56ffbeb865d4589112e9-flexvolume-dir\") pod \"kube-controller-manager-image-438000\" (UID: \"a09e2858012c56ffbeb865d4589112e9\") " pod="kube-system/kube-controller-manager-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.625076    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a09e2858012c56ffbeb865d4589112e9-kubeconfig\") pod \"kube-controller-manager-image-438000\" (UID: \"a09e2858012c56ffbeb865d4589112e9\") " pod="kube-system/kube-controller-manager-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.625087    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a09e2858012c56ffbeb865d4589112e9-usr-share-ca-certificates\") pod \"kube-controller-manager-image-438000\" (UID: \"a09e2858012c56ffbeb865d4589112e9\") " pod="kube-system/kube-controller-manager-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.625096    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/23a1f2e5d3b4ea2a46557aaadb138e83-etcd-certs\") pod \"etcd-image-438000\" (UID: \"23a1f2e5d3b4ea2a46557aaadb138e83\") " pod="kube-system/etcd-image-438000"
	Sep 18 19:17:09 image-438000 kubelet[2329]: I0918 19:17:09.625105    2329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d403184dbd531fd9dade79127282944-k8s-certs\") pod \"kube-apiserver-image-438000\" (UID: \"2d403184dbd531fd9dade79127282944\") " pod="kube-system/kube-apiserver-image-438000"
	Sep 18 19:17:10 image-438000 kubelet[2329]: I0918 19:17:10.308130    2329 apiserver.go:52] "Watching apiserver"
	Sep 18 19:17:10 image-438000 kubelet[2329]: I0918 19:17:10.324767    2329 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 18 19:17:10 image-438000 kubelet[2329]: I0918 19:17:10.359764    2329 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-438000" podStartSLOduration=1.359732008 podCreationTimestamp="2023-09-18 19:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-18 19:17:10.35481005 +0000 UTC m=+1.096183210" watchObservedRunningTime="2023-09-18 19:17:10.359732008 +0000 UTC m=+1.101105127"
	Sep 18 19:17:10 image-438000 kubelet[2329]: I0918 19:17:10.363914    2329 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-438000" podStartSLOduration=1.36389205 podCreationTimestamp="2023-09-18 19:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-18 19:17:10.359897425 +0000 UTC m=+1.101270544" watchObservedRunningTime="2023-09-18 19:17:10.36389205 +0000 UTC m=+1.105265210"
	Sep 18 19:17:10 image-438000 kubelet[2329]: I0918 19:17:10.363942    2329 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-438000" podStartSLOduration=1.363935758 podCreationTimestamp="2023-09-18 19:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-18 19:17:10.3639278 +0000 UTC m=+1.105300960" watchObservedRunningTime="2023-09-18 19:17:10.363935758 +0000 UTC m=+1.105308919"
	Sep 18 19:17:10 image-438000 kubelet[2329]: I0918 19:17:10.367419    2329 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-438000" podStartSLOduration=1.367404841 podCreationTimestamp="2023-09-18 19:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-18 19:17:10.367225966 +0000 UTC m=+1.108599127" watchObservedRunningTime="2023-09-18 19:17:10.367404841 +0000 UTC m=+1.108778002"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-438000 -n image-438000
helpers_test.go:261: (dbg) Run:  kubectl --context image-438000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-438000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-438000 describe pod storage-provisioner: exit status 1 (37.505708ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-438000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.08s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (56.85s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-356000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-356000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.504169958s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-356000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-356000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [030d2ee0-55dc-4ef8-bafa-87c5c200a90a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [030d2ee0-55dc-4ef8-bafa-87c5c200a90a] Running
E0918 12:19:00.289277    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.013765917s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-356000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-356000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-356000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.029859s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-356000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-356000 addons disable ingress-dns --alsologtostderr -v=1: (11.00254875s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-356000 addons disable ingress --alsologtostderr -v=1
E0918 12:19:27.995618    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-356000 addons disable ingress --alsologtostderr -v=1: (7.067802375s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-356000 -n ingress-addon-legacy-356000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-356000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                           Args                           |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-847000 image ls                               | functional-847000           | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	| image   | functional-847000 image load                             | functional-847000           | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|         | /Users/jenkins/workspace/addon-resizer-save.tar          |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-847000 image ls                               | functional-847000           | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	| image   | functional-847000 image save --daemon                    | functional-847000           | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|         | gcr.io/google-containers/addon-resizer:functional-847000 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-847000                                        | functional-847000           | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|         | image ls --format yaml                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-847000                                        | functional-847000           | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|         | image ls --format short                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| ssh     | functional-847000 ssh pgrep                              | functional-847000           | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT |                     |
	|         | buildkitd                                                |                             |         |         |                     |                     |
	| image   | functional-847000                                        | functional-847000           | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|         | image ls --format json                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-847000 image build -t                         | functional-847000           | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|         | localhost/my-image:functional-847000                     |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                         |                             |         |         |                     |                     |
	| image   | functional-847000                                        | functional-847000           | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	|         | image ls --format table                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-847000 image ls                               | functional-847000           | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	| delete  | -p functional-847000                                     | functional-847000           | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:16 PDT |
	| start   | -p image-438000 --driver=qemu2                           | image-438000                | jenkins | v1.31.2 | 18 Sep 23 12:16 PDT | 18 Sep 23 12:17 PDT |
	|         |                                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-438000                | jenkins | v1.31.2 | 18 Sep 23 12:17 PDT | 18 Sep 23 12:17 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | -p image-438000                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-438000                | jenkins | v1.31.2 | 18 Sep 23 12:17 PDT | 18 Sep 23 12:17 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str                 |                             |         |         |                     |                     |
	|         | --build-opt=no-cache                                     |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p                       |                             |         |         |                     |                     |
	|         | image-438000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-438000                | jenkins | v1.31.2 | 18 Sep 23 12:17 PDT | 18 Sep 23 12:17 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | --build-opt=no-cache -p                                  |                             |         |         |                     |                     |
	|         | image-438000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-438000                | jenkins | v1.31.2 | 18 Sep 23 12:17 PDT | 18 Sep 23 12:17 PDT |
	|         | -f inner/Dockerfile                                      |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-f                            |                             |         |         |                     |                     |
	|         | -p image-438000                                          |                             |         |         |                     |                     |
	| delete  | -p image-438000                                          | image-438000                | jenkins | v1.31.2 | 18 Sep 23 12:17 PDT | 18 Sep 23 12:17 PDT |
	| start   | -p ingress-addon-legacy-356000                           | ingress-addon-legacy-356000 | jenkins | v1.31.2 | 18 Sep 23 12:17 PDT | 18 Sep 23 12:18 PDT |
	|         | --kubernetes-version=v1.18.20                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	|         | --driver=qemu2                                           |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-356000                              | ingress-addon-legacy-356000 | jenkins | v1.31.2 | 18 Sep 23 12:18 PDT | 18 Sep 23 12:18 PDT |
	|         | addons enable ingress                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-356000                              | ingress-addon-legacy-356000 | jenkins | v1.31.2 | 18 Sep 23 12:18 PDT | 18 Sep 23 12:18 PDT |
	|         | addons enable ingress-dns                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-356000                              | ingress-addon-legacy-356000 | jenkins | v1.31.2 | 18 Sep 23 12:19 PDT | 18 Sep 23 12:19 PDT |
	|         | ssh curl -s http://127.0.0.1/                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-356000 ip                           | ingress-addon-legacy-356000 | jenkins | v1.31.2 | 18 Sep 23 12:19 PDT | 18 Sep 23 12:19 PDT |
	| addons  | ingress-addon-legacy-356000                              | ingress-addon-legacy-356000 | jenkins | v1.31.2 | 18 Sep 23 12:19 PDT | 18 Sep 23 12:19 PDT |
	|         | addons disable ingress-dns                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-356000                              | ingress-addon-legacy-356000 | jenkins | v1.31.2 | 18 Sep 23 12:19 PDT | 18 Sep 23 12:19 PDT |
	|         | addons disable ingress                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 12:17:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 12:17:12.969562    2818 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:17:12.969701    2818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:17:12.969704    2818 out.go:309] Setting ErrFile to fd 2...
	I0918 12:17:12.969706    2818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:17:12.969852    2818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:17:12.970893    2818 out.go:303] Setting JSON to false
	I0918 12:17:12.986115    2818 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2806,"bootTime":1695061826,"procs":420,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:17:12.986210    2818 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:17:12.989814    2818 out.go:177] * [ingress-addon-legacy-356000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:17:12.996871    2818 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:17:13.000819    2818 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:17:12.996908    2818 notify.go:220] Checking for updates...
	I0918 12:17:13.004795    2818 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:17:13.007866    2818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:17:13.010856    2818 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:17:13.013860    2818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:17:13.017067    2818 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:17:13.020847    2818 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:17:13.027843    2818 start.go:298] selected driver: qemu2
	I0918 12:17:13.027849    2818 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:17:13.027855    2818 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:17:13.029854    2818 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:17:13.032787    2818 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:17:13.035994    2818 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:17:13.036023    2818 cni.go:84] Creating CNI manager for ""
	I0918 12:17:13.036034    2818 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 12:17:13.036040    2818 start_flags.go:321] config:
	{Name:ingress-addon-legacy-356000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-356000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:17:13.040215    2818 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:17:13.047833    2818 out.go:177] * Starting control plane node ingress-addon-legacy-356000 in cluster ingress-addon-legacy-356000
	I0918 12:17:13.051819    2818 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0918 12:17:13.135930    2818 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0918 12:17:13.135943    2818 cache.go:57] Caching tarball of preloaded images
	I0918 12:17:13.136207    2818 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0918 12:17:13.140909    2818 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0918 12:17:13.148820    2818 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0918 12:17:13.234240    2818 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0918 12:17:21.089615    2818 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0918 12:17:21.089768    2818 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0918 12:17:21.842633    2818 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0918 12:17:21.842827    2818 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/config.json ...
	I0918 12:17:21.842849    2818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/config.json: {Name:mka3a971ff56456017742e3c8983b7164e0a1130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:21.843087    2818 start.go:365] acquiring machines lock for ingress-addon-legacy-356000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:17:21.843111    2818 start.go:369] acquired machines lock for "ingress-addon-legacy-356000" in 19.542µs
	I0918 12:17:21.843121    2818 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-356000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:17:21.843155    2818 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:17:21.850114    2818 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0918 12:17:21.864580    2818 start.go:159] libmachine.API.Create for "ingress-addon-legacy-356000" (driver="qemu2")
	I0918 12:17:21.864622    2818 client.go:168] LocalClient.Create starting
	I0918 12:17:21.864710    2818 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:17:21.864742    2818 main.go:141] libmachine: Decoding PEM data...
	I0918 12:17:21.864757    2818 main.go:141] libmachine: Parsing certificate...
	I0918 12:17:21.864793    2818 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:17:21.864812    2818 main.go:141] libmachine: Decoding PEM data...
	I0918 12:17:21.864822    2818 main.go:141] libmachine: Parsing certificate...
	I0918 12:17:21.865155    2818 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:17:21.990890    2818 main.go:141] libmachine: Creating SSH key...
	I0918 12:17:22.121699    2818 main.go:141] libmachine: Creating Disk image...
	I0918 12:17:22.121706    2818 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:17:22.121844    2818 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/disk.qcow2
	I0918 12:17:22.130402    2818 main.go:141] libmachine: STDOUT: 
	I0918 12:17:22.130418    2818 main.go:141] libmachine: STDERR: 
	I0918 12:17:22.130470    2818 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/disk.qcow2 +20000M
	I0918 12:17:22.137567    2818 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:17:22.137580    2818 main.go:141] libmachine: STDERR: 
	I0918 12:17:22.137598    2818 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/disk.qcow2
	I0918 12:17:22.137607    2818 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:17:22.137649    2818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:5c:c8:81:db:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/disk.qcow2
	I0918 12:17:22.171882    2818 main.go:141] libmachine: STDOUT: 
	I0918 12:17:22.171924    2818 main.go:141] libmachine: STDERR: 
	I0918 12:17:22.171931    2818 main.go:141] libmachine: Attempt 0
	I0918 12:17:22.171948    2818 main.go:141] libmachine: Searching for 76:5c:c8:81:db:14 in /var/db/dhcpd_leases ...
	I0918 12:17:22.172008    2818 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0918 12:17:22.172030    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:86:c6:2b:22:62:a1 ID:1,86:c6:2b:22:62:a1 Lease:0x6509f3a5}
	I0918 12:17:22.172037    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:e4:5a:e6:f2:f3 ID:1,72:e4:5a:e6:f2:f3 Lease:0x6509f2e3}
	I0918 12:17:22.172042    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:5a:5:1b:96:65 ID:1,7a:5a:5:1b:96:65 Lease:0x6508a157}
	I0918 12:17:22.172048    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6508a134}
	I0918 12:17:24.174190    2818 main.go:141] libmachine: Attempt 1
	I0918 12:17:24.174267    2818 main.go:141] libmachine: Searching for 76:5c:c8:81:db:14 in /var/db/dhcpd_leases ...
	I0918 12:17:24.174629    2818 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0918 12:17:24.174677    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:86:c6:2b:22:62:a1 ID:1,86:c6:2b:22:62:a1 Lease:0x6509f3a5}
	I0918 12:17:24.174746    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:e4:5a:e6:f2:f3 ID:1,72:e4:5a:e6:f2:f3 Lease:0x6509f2e3}
	I0918 12:17:24.174780    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:5a:5:1b:96:65 ID:1,7a:5a:5:1b:96:65 Lease:0x6508a157}
	I0918 12:17:24.174815    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6508a134}
	I0918 12:17:26.175300    2818 main.go:141] libmachine: Attempt 2
	I0918 12:17:26.175388    2818 main.go:141] libmachine: Searching for 76:5c:c8:81:db:14 in /var/db/dhcpd_leases ...
	I0918 12:17:26.175521    2818 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0918 12:17:26.175535    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:86:c6:2b:22:62:a1 ID:1,86:c6:2b:22:62:a1 Lease:0x6509f3a5}
	I0918 12:17:26.175541    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:e4:5a:e6:f2:f3 ID:1,72:e4:5a:e6:f2:f3 Lease:0x6509f2e3}
	I0918 12:17:26.175547    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:5a:5:1b:96:65 ID:1,7a:5a:5:1b:96:65 Lease:0x6508a157}
	I0918 12:17:26.175552    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6508a134}
	I0918 12:17:28.177585    2818 main.go:141] libmachine: Attempt 3
	I0918 12:17:28.177593    2818 main.go:141] libmachine: Searching for 76:5c:c8:81:db:14 in /var/db/dhcpd_leases ...
	I0918 12:17:28.177628    2818 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0918 12:17:28.177634    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:86:c6:2b:22:62:a1 ID:1,86:c6:2b:22:62:a1 Lease:0x6509f3a5}
	I0918 12:17:28.177640    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:e4:5a:e6:f2:f3 ID:1,72:e4:5a:e6:f2:f3 Lease:0x6509f2e3}
	I0918 12:17:28.177646    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:5a:5:1b:96:65 ID:1,7a:5a:5:1b:96:65 Lease:0x6508a157}
	I0918 12:17:28.177651    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6508a134}
	I0918 12:17:30.179663    2818 main.go:141] libmachine: Attempt 4
	I0918 12:17:30.179670    2818 main.go:141] libmachine: Searching for 76:5c:c8:81:db:14 in /var/db/dhcpd_leases ...
	I0918 12:17:30.179701    2818 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0918 12:17:30.179706    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:86:c6:2b:22:62:a1 ID:1,86:c6:2b:22:62:a1 Lease:0x6509f3a5}
	I0918 12:17:30.179712    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:e4:5a:e6:f2:f3 ID:1,72:e4:5a:e6:f2:f3 Lease:0x6509f2e3}
	I0918 12:17:30.179730    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:5a:5:1b:96:65 ID:1,7a:5a:5:1b:96:65 Lease:0x6508a157}
	I0918 12:17:30.179735    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6508a134}
	I0918 12:17:32.181774    2818 main.go:141] libmachine: Attempt 5
	I0918 12:17:32.181800    2818 main.go:141] libmachine: Searching for 76:5c:c8:81:db:14 in /var/db/dhcpd_leases ...
	I0918 12:17:32.181906    2818 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0918 12:17:32.181920    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:86:c6:2b:22:62:a1 ID:1,86:c6:2b:22:62:a1 Lease:0x6509f3a5}
	I0918 12:17:32.181926    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:72:e4:5a:e6:f2:f3 ID:1,72:e4:5a:e6:f2:f3 Lease:0x6509f2e3}
	I0918 12:17:32.181938    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7a:5a:5:1b:96:65 ID:1,7a:5a:5:1b:96:65 Lease:0x6508a157}
	I0918 12:17:32.181943    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ce:ae:e8:a:fd:16 ID:1,ce:ae:e8:a:fd:16 Lease:0x6508a134}
	I0918 12:17:34.184041    2818 main.go:141] libmachine: Attempt 6
	I0918 12:17:34.184091    2818 main.go:141] libmachine: Searching for 76:5c:c8:81:db:14 in /var/db/dhcpd_leases ...
	I0918 12:17:34.184231    2818 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0918 12:17:34.184243    2818 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:76:5c:c8:81:db:14 ID:1,76:5c:c8:81:db:14 Lease:0x6509f3cd}
	I0918 12:17:34.184248    2818 main.go:141] libmachine: Found match: 76:5c:c8:81:db:14
	I0918 12:17:34.184263    2818 main.go:141] libmachine: IP: 192.168.105.6
	I0918 12:17:34.184268    2818 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
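The attempts above poll `/var/db/dhcpd_leases` until the VM's MAC shows up. A minimal sketch of that lookup is below; the lease-file layout used in `SAMPLE_LEASES` is an assumption based on macOS's dhcpd block format (minikube's real parser lives in its qemu2 driver and may differ in detail):

```python
# Hypothetical sketch: map a MAC address to its IP by scanning dhcpd lease blocks.
# The file format here is assumed, not taken from minikube's source.
import re

SAMPLE_LEASES = """\
{
\tname=minikube
\tip_address=192.168.105.6
\thw_address=1,76:5c:c8:81:db:14
\tlease=0x6509f3cd
}
{
\tname=minikube
\tip_address=192.168.105.5
\thw_address=1,86:c6:2b:22:62:a1
\tlease=0x6509f3a5
}
"""

def find_ip(leases, mac):
    """Return the ip_address of the lease block whose hw_address matches mac."""
    for block in re.findall(r"\{(.*?)\}", leases, re.S):
        fields = dict(
            line.strip().split("=", 1)
            for line in block.strip().splitlines()
            if "=" in line
        )
        # hw_address is "<hardware type>,<mac>"; compare only the MAC part.
        if fields.get("hw_address", "").split(",", 1)[-1] == mac:
            return fields.get("ip_address")
    return None

print(find_ip(SAMPLE_LEASES, "76:5c:c8:81:db:14"))  # 192.168.105.6
```

Polling the lease file (rather than querying the guest) is why the log shows several two-second attempts: the VM has to finish booting and complete DHCP before its MAC appears.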
	I0918 12:17:36.204659    2818 machine.go:88] provisioning docker machine ...
	I0918 12:17:36.204714    2818 buildroot.go:166] provisioning hostname "ingress-addon-legacy-356000"
	I0918 12:17:36.204897    2818 main.go:141] libmachine: Using SSH client type: native
	I0918 12:17:36.205785    2818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1031f4760] 0x1031f6ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0918 12:17:36.205810    2818 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-356000 && echo "ingress-addon-legacy-356000" | sudo tee /etc/hostname
	I0918 12:17:36.300927    2818 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-356000
	
	I0918 12:17:36.301055    2818 main.go:141] libmachine: Using SSH client type: native
	I0918 12:17:36.301562    2818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1031f4760] 0x1031f6ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0918 12:17:36.301581    2818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-356000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-356000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-356000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 12:17:36.380393    2818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
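The SSH command just above patches `/etc/hosts`: if the hostname is absent, it either rewrites an existing `127.0.1.1` line or appends a new one. A pure-Python sketch of the same logic, operating on a string instead of the real file (`ensure_hostname` is a hypothetical helper, not a minikube function):

```python
# Sketch of the /etc/hosts edit performed by the logged shell snippet:
# replace an existing "127.0.1.1 <name>" line, or append one if the
# hostname does not appear anywhere in the file.
import re

def ensure_hostname(hosts, name):
    # Mirrors grep -xq '.*\s<name>': hostname already present on some line.
    if re.search(r"\s" + re.escape(name) + r"$", hosts, re.M):
        return hosts
    # Mirrors the sed branch: rewrite an existing 127.0.1.1 entry in place.
    if re.search(r"^127\.0\.1\.1\s", hosts, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", "127.0.1.1 " + name, hosts, flags=re.M)
    # Mirrors the tee -a branch: append a fresh entry.
    return hosts.rstrip("\n") + "\n127.0.1.1 " + name + "\n"

hosts = "127.0.0.1 localhost\n127.0.1.1 oldname\n"
print(ensure_hostname(hosts, "ingress-addon-legacy-356000"))
```

Note the function is idempotent: a second call finds the hostname via the first check and returns the text unchanged, which is why the logged command is safe to re-run on every provision.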
	I0918 12:17:36.380415    2818 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17263-1251/.minikube CaCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17263-1251/.minikube}
	I0918 12:17:36.380429    2818 buildroot.go:174] setting up certificates
	I0918 12:17:36.380442    2818 provision.go:83] configureAuth start
	I0918 12:17:36.380451    2818 provision.go:138] copyHostCerts
	I0918 12:17:36.380504    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.pem
	I0918 12:17:36.380579    2818 exec_runner.go:144] found /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.pem, removing ...
	I0918 12:17:36.380588    2818 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.pem
	I0918 12:17:36.380825    2818 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.pem (1082 bytes)
	I0918 12:17:36.381138    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cert.pem
	I0918 12:17:36.381183    2818 exec_runner.go:144] found /Users/jenkins/minikube-integration/17263-1251/.minikube/cert.pem, removing ...
	I0918 12:17:36.381187    2818 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17263-1251/.minikube/cert.pem
	I0918 12:17:36.381263    2818 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/cert.pem (1123 bytes)
	I0918 12:17:36.381377    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17263-1251/.minikube/key.pem
	I0918 12:17:36.381416    2818 exec_runner.go:144] found /Users/jenkins/minikube-integration/17263-1251/.minikube/key.pem, removing ...
	I0918 12:17:36.381424    2818 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17263-1251/.minikube/key.pem
	I0918 12:17:36.381492    2818 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17263-1251/.minikube/key.pem (1679 bytes)
	I0918 12:17:36.381623    2818 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-356000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-356000]
	I0918 12:17:36.547067    2818 provision.go:172] copyRemoteCerts
	I0918 12:17:36.547111    2818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 12:17:36.547120    2818 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/id_rsa Username:docker}
	I0918 12:17:36.581341    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 12:17:36.581393    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 12:17:36.588932    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 12:17:36.589000    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 12:17:36.595902    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 12:17:36.595949    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0918 12:17:36.602587    2818 provision.go:86] duration metric: configureAuth took 222.136958ms
	I0918 12:17:36.602595    2818 buildroot.go:189] setting minikube options for container-runtime
	I0918 12:17:36.602703    2818 config.go:182] Loaded profile config "ingress-addon-legacy-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0918 12:17:36.602744    2818 main.go:141] libmachine: Using SSH client type: native
	I0918 12:17:36.602962    2818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1031f4760] 0x1031f6ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0918 12:17:36.602967    2818 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 12:17:36.663911    2818 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0918 12:17:36.663918    2818 buildroot.go:70] root file system type: tmpfs
	I0918 12:17:36.663971    2818 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 12:17:36.664019    2818 main.go:141] libmachine: Using SSH client type: native
	I0918 12:17:36.664253    2818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1031f4760] 0x1031f6ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0918 12:17:36.664296    2818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 12:17:36.730498    2818 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 12:17:36.730547    2818 main.go:141] libmachine: Using SSH client type: native
	I0918 12:17:36.730792    2818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1031f4760] 0x1031f6ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0918 12:17:36.730801    2818 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0918 12:17:37.061130    2818 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0918 12:17:37.061142    2818 machine.go:91] provisioned docker machine in 856.465417ms
	I0918 12:17:37.061148    2818 client.go:171] LocalClient.Create took 15.1966775s
	I0918 12:17:37.061159    2818 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-356000" took 15.1967415s
	I0918 12:17:37.061163    2818 start.go:300] post-start starting for "ingress-addon-legacy-356000" (driver="qemu2")
	I0918 12:17:37.061168    2818 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 12:17:37.061246    2818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 12:17:37.061269    2818 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/id_rsa Username:docker}
	I0918 12:17:37.093189    2818 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 12:17:37.094416    2818 info.go:137] Remote host: Buildroot 2021.02.12
	I0918 12:17:37.094424    2818 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17263-1251/.minikube/addons for local assets ...
	I0918 12:17:37.094495    2818 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17263-1251/.minikube/files for local assets ...
	I0918 12:17:37.094598    2818 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17263-1251/.minikube/files/etc/ssl/certs/16682.pem -> 16682.pem in /etc/ssl/certs
	I0918 12:17:37.094603    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/files/etc/ssl/certs/16682.pem -> /etc/ssl/certs/16682.pem
	I0918 12:17:37.094762    2818 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 12:17:37.097405    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/files/etc/ssl/certs/16682.pem --> /etc/ssl/certs/16682.pem (1708 bytes)
	I0918 12:17:37.104021    2818 start.go:303] post-start completed in 42.85325ms
	I0918 12:17:37.104427    2818 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/config.json ...
	I0918 12:17:37.104584    2818 start.go:128] duration metric: createHost completed in 15.261575042s
	I0918 12:17:37.104615    2818 main.go:141] libmachine: Using SSH client type: native
	I0918 12:17:37.104823    2818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1031f4760] 0x1031f6ed0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0918 12:17:37.104827    2818 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0918 12:17:37.163457    2818 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695064657.609484877
	
	I0918 12:17:37.163465    2818 fix.go:206] guest clock: 1695064657.609484877
	I0918 12:17:37.163469    2818 fix.go:219] Guest: 2023-09-18 12:17:37.609484877 -0700 PDT Remote: 2023-09-18 12:17:37.104586 -0700 PDT m=+24.153161292 (delta=504.898877ms)
	I0918 12:17:37.163483    2818 fix.go:190] guest clock delta is within tolerance: 504.898877ms
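The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the delta is small enough. A sketch of that check using the exact values from this log; the 1-second tolerance is an assumption, since the log reports only that 504.898877ms was "within tolerance", not the threshold itself:

```python
# Sketch of the guest-clock tolerance check logged above. The tolerance
# value is assumed (the log does not state minikube's actual threshold).
from datetime import datetime, timezone, timedelta

def clock_delta_ok(guest_epoch, host, tolerance):
    """True if |guest clock - host clock| is within tolerance."""
    guest = datetime.fromtimestamp(guest_epoch, tz=timezone.utc)
    return abs(guest - host.astimezone(timezone.utc)) <= tolerance

# Values from the log: guest reported epoch 1695064657.609484877,
# host time was 2023-09-18 12:17:37.104586 -0700 PDT (delta ~504.9ms).
host = datetime.fromisoformat("2023-09-18 12:17:37.104586-07:00")
print(clock_delta_ok(1695064657.609484877, host, timedelta(seconds=1)))
```

Skipping a clock adjustment when the drift is under tolerance avoids an extra SSH round-trip on every start; larger drifts matter because TLS certificate validation inside the guest is time-sensitive.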
	I0918 12:17:37.163486    2818 start.go:83] releasing machines lock for "ingress-addon-legacy-356000", held for 15.320526958s
	I0918 12:17:37.163741    2818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 12:17:37.163745    2818 ssh_runner.go:195] Run: cat /version.json
	I0918 12:17:37.163752    2818 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/id_rsa Username:docker}
	I0918 12:17:37.163761    2818 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/id_rsa Username:docker}
	I0918 12:17:37.239161    2818 ssh_runner.go:195] Run: systemctl --version
	I0918 12:17:37.241244    2818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 12:17:37.243181    2818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 12:17:37.243210    2818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0918 12:17:37.246346    2818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0918 12:17:37.251038    2818 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 12:17:37.251045    2818 start.go:469] detecting cgroup driver to use...
	I0918 12:17:37.251119    2818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 12:17:37.257844    2818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0918 12:17:37.261416    2818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 12:17:37.264736    2818 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 12:17:37.264760    2818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 12:17:37.267542    2818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 12:17:37.270585    2818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 12:17:37.274026    2818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 12:17:37.277516    2818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 12:17:37.280886    2818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 12:17:37.283808    2818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 12:17:37.286591    2818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 12:17:37.289972    2818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:17:37.355599    2818 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0918 12:17:37.364563    2818 start.go:469] detecting cgroup driver to use...
	I0918 12:17:37.364627    2818 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 12:17:37.369847    2818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 12:17:37.374604    2818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 12:17:37.382584    2818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 12:17:37.387767    2818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 12:17:37.392387    2818 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0918 12:17:37.428906    2818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 12:17:37.433677    2818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 12:17:37.438946    2818 ssh_runner.go:195] Run: which cri-dockerd
	I0918 12:17:37.440481    2818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 12:17:37.442949    2818 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0918 12:17:37.448024    2818 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 12:17:37.512854    2818 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 12:17:37.584226    2818 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 12:17:37.584243    2818 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0918 12:17:37.589518    2818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:17:37.651619    2818 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 12:17:38.802173    2818 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.150549792s)
	I0918 12:17:38.802249    2818 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 12:17:38.811644    2818 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 12:17:38.823483    2818 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I0918 12:17:38.823562    2818 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0918 12:17:38.824904    2818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 12:17:38.828656    2818 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0918 12:17:38.828704    2818 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 12:17:38.833820    2818 docker.go:636] Got preloaded images: 
	I0918 12:17:38.833828    2818 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0918 12:17:38.833867    2818 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 12:17:38.837557    2818 ssh_runner.go:195] Run: which lz4
	I0918 12:17:38.838995    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0918 12:17:38.839092    2818 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 12:17:38.840588    2818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 12:17:38.840603    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0918 12:17:40.507191    2818 docker.go:600] Took 1.668148 seconds to copy over tarball
	I0918 12:17:40.507250    2818 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 12:17:41.794319    2818 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.287065s)
	I0918 12:17:41.794334    2818 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 12:17:41.816034    2818 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 12:17:41.819281    2818 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0918 12:17:41.827449    2818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:17:41.890583    2818 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 12:17:43.428316    2818 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.537731083s)
	I0918 12:17:43.428411    2818 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 12:17:43.434281    2818 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0918 12:17:43.434291    2818 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0918 12:17:43.434295    2818 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 12:17:43.441190    2818 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0918 12:17:43.441217    2818 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0918 12:17:43.441257    2818 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0918 12:17:43.441319    2818 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0918 12:17:43.441494    2818 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0918 12:17:43.441545    2818 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0918 12:17:43.446008    2818 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 12:17:43.447106    2818 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0918 12:17:43.454832    2818 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0918 12:17:43.454906    2818 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0918 12:17:43.454906    2818 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0918 12:17:43.455799    2818 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0918 12:17:43.455819    2818 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0918 12:17:43.457272    2818 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0918 12:17:43.458251    2818 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 12:17:43.458328    2818 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	W0918 12:17:44.093287    2818 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0918 12:17:44.093435    2818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0918 12:17:44.099652    2818 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0918 12:17:44.099674    2818 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0918 12:17:44.099723    2818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0918 12:17:44.109838    2818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0918 12:17:44.143995    2818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0918 12:17:44.150351    2818 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0918 12:17:44.150375    2818 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0918 12:17:44.150433    2818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0918 12:17:44.160234    2818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0918 12:17:44.290812    2818 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0918 12:17:44.290932    2818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0918 12:17:44.297307    2818 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0918 12:17:44.297343    2818 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0918 12:17:44.297406    2818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0918 12:17:44.303775    2818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0918 12:17:44.505823    2818 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0918 12:17:44.505980    2818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0918 12:17:44.512475    2818 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0918 12:17:44.512507    2818 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0918 12:17:44.512541    2818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0918 12:17:44.518644    2818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0918 12:17:44.705615    2818 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0918 12:17:44.705725    2818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0918 12:17:44.712212    2818 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0918 12:17:44.712239    2818 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0918 12:17:44.712287    2818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0918 12:17:44.720345    2818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0918 12:17:44.892192    2818 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0918 12:17:44.892304    2818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0918 12:17:44.898460    2818 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0918 12:17:44.898491    2818 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0918 12:17:44.898533    2818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0918 12:17:44.904532    2818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0918 12:17:45.298649    2818 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0918 12:17:45.299242    2818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0918 12:17:45.317324    2818 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0918 12:17:45.317370    2818 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0918 12:17:45.317469    2818 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0918 12:17:45.331283    2818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0918 12:17:45.596955    2818 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0918 12:17:45.597506    2818 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 12:17:45.621457    2818 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0918 12:17:45.621521    2818 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 12:17:45.621660    2818 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 12:17:45.647021    2818 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 12:17:45.647116    2818 cache_images.go:92] LoadImages completed in 2.212836375s
	W0918 12:17:45.647186    2818 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
	I0918 12:17:45.647282    2818 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 12:17:45.662179    2818 cni.go:84] Creating CNI manager for ""
	I0918 12:17:45.662193    2818 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 12:17:45.662206    2818 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0918 12:17:45.662223    2818 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-356000 NodeName:ingress-addon-legacy-356000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0918 12:17:45.662346    2818 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-356000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 12:17:45.662412    2818 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-356000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-356000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0918 12:17:45.662479    2818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0918 12:17:45.667165    2818 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 12:17:45.667212    2818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 12:17:45.671044    2818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0918 12:17:45.677513    2818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0918 12:17:45.683549    2818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0918 12:17:45.689262    2818 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0918 12:17:45.690572    2818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 12:17:45.694344    2818 certs.go:56] Setting up /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000 for IP: 192.168.105.6
	I0918 12:17:45.694356    2818 certs.go:190] acquiring lock for shared ca certs: {Name:mkfac81ee65979b8c4f5ece6243c3a0190531689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:45.694498    2818 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key
	I0918 12:17:45.694548    2818 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key
	I0918 12:17:45.694577    2818 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.key
	I0918 12:17:45.694589    2818 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt with IP's: []
	I0918 12:17:45.784862    2818 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt ...
	I0918 12:17:45.784866    2818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: {Name:mke28d17b600d84baaaee91ae3f45784ab469067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:45.785102    2818 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.key ...
	I0918 12:17:45.785106    2818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.key: {Name:mk591fe10f7bbb96d417b618348953b566c8f852 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:45.785227    2818 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.key.b354f644
	I0918 12:17:45.785234    2818 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0918 12:17:45.851765    2818 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.crt.b354f644 ...
	I0918 12:17:45.851768    2818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.crt.b354f644: {Name:mke6be88daeb268cf4eb2c5896b69064482acd24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:45.851931    2818 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.key.b354f644 ...
	I0918 12:17:45.851935    2818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.key.b354f644: {Name:mk0eed7c631384ea8e245b35b7685a1766f6d811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:45.852045    2818 certs.go:337] copying /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.crt
	I0918 12:17:45.852149    2818 certs.go:341] copying /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.key
	I0918 12:17:45.852256    2818 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/proxy-client.key
	I0918 12:17:45.852265    2818 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/proxy-client.crt with IP's: []
	I0918 12:17:45.988661    2818 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/proxy-client.crt ...
	I0918 12:17:45.988665    2818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/proxy-client.crt: {Name:mke725a067851c2e2835a8721aa1dee38f3354a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:45.988795    2818 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/proxy-client.key ...
	I0918 12:17:45.988799    2818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/proxy-client.key: {Name:mk7aa559355f2f5f83203c43dbde03b7a9bf57f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:17:45.988909    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 12:17:45.988927    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 12:17:45.988942    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 12:17:45.988955    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 12:17:45.988967    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 12:17:45.988979    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 12:17:45.988990    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 12:17:45.989002    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 12:17:45.989086    2818 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/1668.pem (1338 bytes)
	W0918 12:17:45.989114    2818 certs.go:433] ignoring /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/1668_empty.pem, impossibly tiny 0 bytes
	I0918 12:17:45.989122    2818 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 12:17:45.989148    2818 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem (1082 bytes)
	I0918 12:17:45.989171    2818 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem (1123 bytes)
	I0918 12:17:45.989196    2818 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/certs/key.pem (1679 bytes)
	I0918 12:17:45.989253    2818 certs.go:437] found cert: /Users/jenkins/minikube-integration/17263-1251/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17263-1251/.minikube/files/etc/ssl/certs/16682.pem (1708 bytes)
	I0918 12:17:45.989278    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 12:17:45.989289    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/1668.pem -> /usr/share/ca-certificates/1668.pem
	I0918 12:17:45.989304    2818 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17263-1251/.minikube/files/etc/ssl/certs/16682.pem -> /usr/share/ca-certificates/16682.pem
	I0918 12:17:45.989662    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0918 12:17:45.997522    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 12:17:46.004889    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 12:17:46.011865    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 12:17:46.018541    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 12:17:46.025667    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 12:17:46.032938    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 12:17:46.039685    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0918 12:17:46.046345    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 12:17:46.053629    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/1668.pem --> /usr/share/ca-certificates/1668.pem (1338 bytes)
	I0918 12:17:46.060984    2818 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17263-1251/.minikube/files/etc/ssl/certs/16682.pem --> /usr/share/ca-certificates/16682.pem (1708 bytes)
	I0918 12:17:46.067928    2818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 12:17:46.072601    2818 ssh_runner.go:195] Run: openssl version
	I0918 12:17:46.074587    2818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 12:17:46.078115    2818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 12:17:46.079774    2818 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 18 18:52 /usr/share/ca-certificates/minikubeCA.pem
	I0918 12:17:46.079793    2818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 12:17:46.081560    2818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 12:17:46.084784    2818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1668.pem && ln -fs /usr/share/ca-certificates/1668.pem /etc/ssl/certs/1668.pem"
	I0918 12:17:46.087835    2818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1668.pem
	I0918 12:17:46.089375    2818 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:13 /usr/share/ca-certificates/1668.pem
	I0918 12:17:46.089405    2818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1668.pem
	I0918 12:17:46.091386    2818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1668.pem /etc/ssl/certs/51391683.0"
	I0918 12:17:46.094596    2818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16682.pem && ln -fs /usr/share/ca-certificates/16682.pem /etc/ssl/certs/16682.pem"
	I0918 12:17:46.098006    2818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16682.pem
	I0918 12:17:46.099612    2818 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:13 /usr/share/ca-certificates/16682.pem
	I0918 12:17:46.099634    2818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16682.pem
	I0918 12:17:46.101446    2818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16682.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 12:17:46.104923    2818 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0918 12:17:46.106198    2818 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0918 12:17:46.106226    2818 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-356000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:17:46.106306    2818 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 12:17:46.111871    2818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 12:17:46.114780    2818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 12:17:46.117924    2818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 12:17:46.121042    2818 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 12:17:46.121055    2818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0918 12:17:46.145177    2818 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0918 12:17:46.145289    2818 kubeadm.go:322] [preflight] Running pre-flight checks
	I0918 12:17:46.226841    2818 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 12:17:46.226909    2818 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 12:17:46.226985    2818 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 12:17:46.272784    2818 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 12:17:46.273373    2818 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 12:17:46.273432    2818 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0918 12:17:46.343866    2818 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 12:17:46.348040    2818 out.go:204]   - Generating certificates and keys ...
	I0918 12:17:46.348086    2818 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0918 12:17:46.348122    2818 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0918 12:17:46.517219    2818 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 12:17:46.645167    2818 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0918 12:17:46.684416    2818 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0918 12:17:46.750200    2818 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0918 12:17:46.933946    2818 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0918 12:17:46.934088    2818 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-356000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0918 12:17:47.067947    2818 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0918 12:17:47.068017    2818 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-356000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0918 12:17:47.181856    2818 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 12:17:47.295729    2818 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 12:17:47.400361    2818 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0918 12:17:47.400512    2818 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 12:17:47.492507    2818 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 12:17:47.828603    2818 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 12:17:47.946623    2818 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 12:17:48.078078    2818 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 12:17:48.078412    2818 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 12:17:48.081706    2818 out.go:204]   - Booting up control plane ...
	I0918 12:17:48.081765    2818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 12:17:48.082328    2818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 12:17:48.082870    2818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 12:17:48.083449    2818 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 12:17:48.084948    2818 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 12:18:00.092587    2818 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.006598 seconds
	I0918 12:18:00.092927    2818 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 12:18:00.117695    2818 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 12:18:00.633193    2818 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 12:18:00.633301    2818 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-356000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0918 12:18:01.146383    2818 kubeadm.go:322] [bootstrap-token] Using token: v8hty8.2raazka23nvpy77f
	I0918 12:18:01.150333    2818 out.go:204]   - Configuring RBAC rules ...
	I0918 12:18:01.150494    2818 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 12:18:01.160946    2818 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 12:18:01.167854    2818 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 12:18:01.169914    2818 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 12:18:01.171900    2818 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 12:18:01.173925    2818 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 12:18:01.179952    2818 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 12:18:01.356633    2818 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0918 12:18:01.563136    2818 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0918 12:18:01.564252    2818 kubeadm.go:322] 
	I0918 12:18:01.564294    2818 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0918 12:18:01.564301    2818 kubeadm.go:322] 
	I0918 12:18:01.564341    2818 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0918 12:18:01.564344    2818 kubeadm.go:322] 
	I0918 12:18:01.564357    2818 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0918 12:18:01.564388    2818 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 12:18:01.564414    2818 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 12:18:01.564417    2818 kubeadm.go:322] 
	I0918 12:18:01.564457    2818 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0918 12:18:01.564528    2818 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 12:18:01.564567    2818 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 12:18:01.564571    2818 kubeadm.go:322] 
	I0918 12:18:01.564625    2818 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 12:18:01.564680    2818 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0918 12:18:01.564684    2818 kubeadm.go:322] 
	I0918 12:18:01.564731    2818 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token v8hty8.2raazka23nvpy77f \
	I0918 12:18:01.564785    2818 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ba7ba5242719fff40b98ade8a053fc0c6dded3185e28e2742ca7444c7c25a7a5 \
	I0918 12:18:01.564799    2818 kubeadm.go:322]     --control-plane 
	I0918 12:18:01.564805    2818 kubeadm.go:322] 
	I0918 12:18:01.564855    2818 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0918 12:18:01.564858    2818 kubeadm.go:322] 
	I0918 12:18:01.564902    2818 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token v8hty8.2raazka23nvpy77f \
	I0918 12:18:01.564977    2818 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ba7ba5242719fff40b98ade8a053fc0c6dded3185e28e2742ca7444c7c25a7a5 
	I0918 12:18:01.565098    2818 kubeadm.go:322] W0918 19:17:46.591146    1419 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0918 12:18:01.565210    2818 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0918 12:18:01.565294    2818 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I0918 12:18:01.565365    2818 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 12:18:01.565434    2818 kubeadm.go:322] W0918 19:17:48.528486    1419 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0918 12:18:01.565514    2818 kubeadm.go:322] W0918 19:17:48.529113    1419 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0918 12:18:01.565521    2818 cni.go:84] Creating CNI manager for ""
	I0918 12:18:01.565530    2818 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 12:18:01.565542    2818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 12:18:01.565619    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:01.565623    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36 minikube.k8s.io/name=ingress-addon-legacy-356000 minikube.k8s.io/updated_at=2023_09_18T12_18_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:01.631016    2818 ops.go:34] apiserver oom_adj: -16
	I0918 12:18:01.631049    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:01.664389    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:02.199240    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:02.699078    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:03.199212    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:03.699273    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:04.199117    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:04.699363    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:05.199260    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:05.699267    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:06.199384    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:06.699043    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:07.199174    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:07.699260    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:08.198935    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:08.699102    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:09.199275    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:09.698921    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:10.199313    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:10.699199    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:11.199166    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:11.699222    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:12.199216    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:12.699205    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:13.197534    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:13.699109    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:14.199201    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:14.699154    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:15.199192    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:15.699169    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:16.199160    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:16.699111    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:17.199096    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:17.699081    2818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:18:17.799202    2818 kubeadm.go:1081] duration metric: took 16.233809875s to wait for elevateKubeSystemPrivileges.
	I0918 12:18:17.799221    2818 kubeadm.go:406] StartCluster complete in 31.693318542s
	I0918 12:18:17.799233    2818 settings.go:142] acquiring lock: {Name:mke420f28dda4f7a752738b3e6d571dc4216779e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:18:17.799322    2818 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:18:17.799724    2818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/kubeconfig: {Name:mk07020c5b974cf07ca0cda25f72a521eb245fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:18:17.799955    2818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 12:18:17.800000    2818 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0918 12:18:17.800063    2818 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-356000"
	I0918 12:18:17.800067    2818 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-356000"
	I0918 12:18:17.800070    2818 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-356000"
	I0918 12:18:17.800079    2818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-356000"
	I0918 12:18:17.800091    2818 host.go:66] Checking if "ingress-addon-legacy-356000" exists ...
	I0918 12:18:17.800215    2818 kapi.go:59] client config for ingress-addon-legacy-356000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.key", CAFile:"/Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1044b4c30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 12:18:17.800308    2818 config.go:182] Loaded profile config "ingress-addon-legacy-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0918 12:18:17.800631    2818 cert_rotation.go:137] Starting client certificate rotation controller
	I0918 12:18:17.801194    2818 kapi.go:59] client config for ingress-addon-legacy-356000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.key", CAFile:"/Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1044b4c30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 12:18:17.806058    2818 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 12:18:17.811104    2818 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 12:18:17.811111    2818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 12:18:17.811121    2818 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/id_rsa Username:docker}
	I0918 12:18:17.816755    2818 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-356000"
	I0918 12:18:17.816775    2818 host.go:66] Checking if "ingress-addon-legacy-356000" exists ...
	I0918 12:18:17.817426    2818 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 12:18:17.817433    2818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 12:18:17.817440    2818 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/ingress-addon-legacy-356000/id_rsa Username:docker}
	I0918 12:18:17.822469    2818 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-356000" context rescaled to 1 replicas
	I0918 12:18:17.822491    2818 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:18:17.825903    2818 out.go:177] * Verifying Kubernetes components...
	I0918 12:18:17.834062    2818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 12:18:17.876885    2818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 12:18:17.877098    2818 kapi.go:59] client config for ingress-addon-legacy-356000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.key", CAFile:"/Users/jenkins/minikube-integration/17263-1251/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1044b4c30), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 12:18:17.877233    2818 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-356000" to be "Ready" ...
	I0918 12:18:17.878693    2818 node_ready.go:49] node "ingress-addon-legacy-356000" has status "Ready":"True"
	I0918 12:18:17.878699    2818 node_ready.go:38] duration metric: took 1.458792ms waiting for node "ingress-addon-legacy-356000" to be "Ready" ...
	I0918 12:18:17.878703    2818 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 12:18:17.882317    2818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 12:18:17.884336    2818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-q8rzc" in "kube-system" namespace to be "Ready" ...
	I0918 12:18:17.900069    2818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 12:18:18.062343    2818 start.go:917] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0918 12:18:18.172836    2818 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0918 12:18:18.180809    2818 addons.go:502] enable addons completed in 380.81725ms: enabled=[default-storageclass storage-provisioner]
	I0918 12:18:19.899133    2818 pod_ready.go:102] pod "coredns-66bff467f8-q8rzc" in "kube-system" namespace has status "Ready":"False"
	I0918 12:18:21.907775    2818 pod_ready.go:102] pod "coredns-66bff467f8-q8rzc" in "kube-system" namespace has status "Ready":"False"
	I0918 12:18:23.899116    2818 pod_ready.go:92] pod "coredns-66bff467f8-q8rzc" in "kube-system" namespace has status "Ready":"True"
	I0918 12:18:23.899134    2818 pod_ready.go:81] duration metric: took 6.014851125s waiting for pod "coredns-66bff467f8-q8rzc" in "kube-system" namespace to be "Ready" ...
	I0918 12:18:23.899141    2818 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-356000" in "kube-system" namespace to be "Ready" ...
	I0918 12:18:23.902370    2818 pod_ready.go:92] pod "etcd-ingress-addon-legacy-356000" in "kube-system" namespace has status "Ready":"True"
	I0918 12:18:23.902381    2818 pod_ready.go:81] duration metric: took 3.234292ms waiting for pod "etcd-ingress-addon-legacy-356000" in "kube-system" namespace to be "Ready" ...
	I0918 12:18:23.902387    2818 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-356000" in "kube-system" namespace to be "Ready" ...
	I0918 12:18:23.905883    2818 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-356000" in "kube-system" namespace has status "Ready":"True"
	I0918 12:18:23.905893    2818 pod_ready.go:81] duration metric: took 3.500625ms waiting for pod "kube-apiserver-ingress-addon-legacy-356000" in "kube-system" namespace to be "Ready" ...
	I0918 12:18:23.905899    2818 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-356000" in "kube-system" namespace to be "Ready" ...
	I0918 12:18:23.909264    2818 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-356000" in "kube-system" namespace has status "Ready":"True"
	I0918 12:18:23.909272    2818 pod_ready.go:81] duration metric: took 3.3695ms waiting for pod "kube-controller-manager-ingress-addon-legacy-356000" in "kube-system" namespace to be "Ready" ...
	I0918 12:18:23.909278    2818 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rhtk4" in "kube-system" namespace to be "Ready" ...
	I0918 12:18:23.912105    2818 pod_ready.go:92] pod "kube-proxy-rhtk4" in "kube-system" namespace has status "Ready":"True"
	I0918 12:18:23.912115    2818 pod_ready.go:81] duration metric: took 2.833042ms waiting for pod "kube-proxy-rhtk4" in "kube-system" namespace to be "Ready" ...
	I0918 12:18:23.912120    2818 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-356000" in "kube-system" namespace to be "Ready" ...
	I0918 12:18:24.096535    2818 request.go:629] Waited for 184.3405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-356000
	I0918 12:18:24.295143    2818 request.go:629] Waited for 193.010958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-356000
	I0918 12:18:24.302432    2818 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-356000" in "kube-system" namespace has status "Ready":"True"
	I0918 12:18:24.302466    2818 pod_ready.go:81] duration metric: took 390.338875ms waiting for pod "kube-scheduler-ingress-addon-legacy-356000" in "kube-system" namespace to be "Ready" ...
	I0918 12:18:24.302490    2818 pod_ready.go:38] duration metric: took 6.423841875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 12:18:24.302548    2818 api_server.go:52] waiting for apiserver process to appear ...
	I0918 12:18:24.302825    2818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 12:18:24.320314    2818 api_server.go:72] duration metric: took 6.497863834s to wait for apiserver process to appear ...
	I0918 12:18:24.320353    2818 api_server.go:88] waiting for apiserver healthz status ...
	I0918 12:18:24.320373    2818 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0918 12:18:24.329038    2818 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0918 12:18:24.330061    2818 api_server.go:141] control plane version: v1.18.20
	I0918 12:18:24.330077    2818 api_server.go:131] duration metric: took 9.716ms to wait for apiserver health ...
	I0918 12:18:24.330085    2818 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 12:18:24.496536    2818 request.go:629] Waited for 166.368875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0918 12:18:24.508637    2818 system_pods.go:59] 7 kube-system pods found
	I0918 12:18:24.508675    2818 system_pods.go:61] "coredns-66bff467f8-q8rzc" [3b2fae9f-5c05-4d07-b224-d8164540961e] Running
	I0918 12:18:24.508685    2818 system_pods.go:61] "etcd-ingress-addon-legacy-356000" [fea0c2b4-ec0d-46c0-bf71-c8a21f9161eb] Running
	I0918 12:18:24.508706    2818 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-356000" [9b332989-cedc-4bc9-a9a5-74db9a8ba3b5] Running
	I0918 12:18:24.508717    2818 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-356000" [2c953477-1006-4a87-a190-ea9606f9d24c] Running
	I0918 12:18:24.508726    2818 system_pods.go:61] "kube-proxy-rhtk4" [266b2186-74cf-417e-85e1-02c97a57ec03] Running
	I0918 12:18:24.508738    2818 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-356000" [f1f0d1f0-17bf-4e15-8d58-1861b39734f5] Running
	I0918 12:18:24.508750    2818 system_pods.go:61] "storage-provisioner" [26d72fc7-9850-4aad-a5e3-8d9a23616d55] Running
	I0918 12:18:24.508761    2818 system_pods.go:74] duration metric: took 178.669791ms to wait for pod list to return data ...
	I0918 12:18:24.508775    2818 default_sa.go:34] waiting for default service account to be created ...
	I0918 12:18:24.696529    2818 request.go:629] Waited for 187.6455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0918 12:18:24.702596    2818 default_sa.go:45] found service account: "default"
	I0918 12:18:24.702627    2818 default_sa.go:55] duration metric: took 193.842875ms for default service account to be created ...
	I0918 12:18:24.702643    2818 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 12:18:24.896493    2818 request.go:629] Waited for 193.772542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0918 12:18:24.909898    2818 system_pods.go:86] 7 kube-system pods found
	I0918 12:18:24.909931    2818 system_pods.go:89] "coredns-66bff467f8-q8rzc" [3b2fae9f-5c05-4d07-b224-d8164540961e] Running
	I0918 12:18:24.909943    2818 system_pods.go:89] "etcd-ingress-addon-legacy-356000" [fea0c2b4-ec0d-46c0-bf71-c8a21f9161eb] Running
	I0918 12:18:24.909954    2818 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-356000" [9b332989-cedc-4bc9-a9a5-74db9a8ba3b5] Running
	I0918 12:18:24.909964    2818 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-356000" [2c953477-1006-4a87-a190-ea9606f9d24c] Running
	I0918 12:18:24.909973    2818 system_pods.go:89] "kube-proxy-rhtk4" [266b2186-74cf-417e-85e1-02c97a57ec03] Running
	I0918 12:18:24.909988    2818 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-356000" [f1f0d1f0-17bf-4e15-8d58-1861b39734f5] Running
	I0918 12:18:24.909996    2818 system_pods.go:89] "storage-provisioner" [26d72fc7-9850-4aad-a5e3-8d9a23616d55] Running
	I0918 12:18:24.910012    2818 system_pods.go:126] duration metric: took 207.357541ms to wait for k8s-apps to be running ...
	I0918 12:18:24.910022    2818 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 12:18:24.910225    2818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 12:18:24.927812    2818 system_svc.go:56] duration metric: took 17.786542ms WaitForService to wait for kubelet.
	I0918 12:18:24.927831    2818 kubeadm.go:581] duration metric: took 7.105395334s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0918 12:18:24.927854    2818 node_conditions.go:102] verifying NodePressure condition ...
	I0918 12:18:25.096577    2818 request.go:629] Waited for 168.589083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0918 12:18:25.103879    2818 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0918 12:18:25.103919    2818 node_conditions.go:123] node cpu capacity is 2
	I0918 12:18:25.103943    2818 node_conditions.go:105] duration metric: took 176.083167ms to run NodePressure ...
	I0918 12:18:25.103969    2818 start.go:228] waiting for startup goroutines ...
	I0918 12:18:25.103983    2818 start.go:233] waiting for cluster config update ...
	I0918 12:18:25.104015    2818 start.go:242] writing updated cluster config ...
	I0918 12:18:25.105208    2818 ssh_runner.go:195] Run: rm -f paused
	I0918 12:18:25.167453    2818 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0918 12:18:25.170566    2818 out.go:177] 
	W0918 12:18:25.174573    2818 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0918 12:18:25.177470    2818 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0918 12:18:25.183589    2818 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-356000" cluster and "default" namespace by default
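	The warning above is driven purely by the gap between the local kubectl's minor version (1.27) and the cluster's (1.18). A hypothetical helper showing how the reported "minor skew: 9" falls out of the two version strings (illustrative only; `minorSkew` is not minikube's actual code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0
	}
	n, _ := strconv.Atoi(parts[1])
	return n
}

// minorSkew returns the absolute difference between two versions' minor numbers.
func minorSkew(a, b string) int {
	am, bm := minor(a), minor(b)
	if am > bm {
		return am - bm
	}
	return bm - am
}

func main() {
	fmt.Println(minorSkew("1.27.2", "1.18.20")) // prints 9
}
```

	A skew this large is why the log suggests `minikube kubectl`, which runs a kubectl binary matching the cluster's v1.18.20.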
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-18 19:17:33 UTC, ends at Mon 2023-09-18 19:19:35 UTC. --
	Sep 18 19:19:04 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:04.580337854Z" level=info msg="shim disconnected" id=2cae845ab213bddc60601ff928df0363ad7ad7c743c4b0690e79350ffcb78fdb namespace=moby
	Sep 18 19:19:04 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:04.580374978Z" level=warning msg="cleaning up after shim disconnected" id=2cae845ab213bddc60601ff928df0363ad7ad7c743c4b0690e79350ffcb78fdb namespace=moby
	Sep 18 19:19:04 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:04.580380228Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:19:16 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:16.854801941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 19:19:16 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:16.854904107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:19:16 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:16.854930607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 19:19:16 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:16.854938232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:19:16 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:16.900769829Z" level=info msg="shim disconnected" id=f43e083a9b54f0ae2b5bc6f13843046118b0fd5fbc2d7160949ca582e9caf920 namespace=moby
	Sep 18 19:19:16 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:16.900797537Z" level=warning msg="cleaning up after shim disconnected" id=f43e083a9b54f0ae2b5bc6f13843046118b0fd5fbc2d7160949ca582e9caf920 namespace=moby
	Sep 18 19:19:16 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:16.900801745Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:19:16 ingress-addon-legacy-356000 dockerd[1092]: time="2023-09-18T19:19:16.900953161Z" level=info msg="ignoring event" container=f43e083a9b54f0ae2b5bc6f13843046118b0fd5fbc2d7160949ca582e9caf920 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:19:17 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:17.805334144Z" level=info msg="shim disconnected" id=dc2c923c1560f95e15aff8c9738850704ecad3b39cbfb9d2b8a2d804f922dd83 namespace=moby
	Sep 18 19:19:17 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:17.805369894Z" level=warning msg="cleaning up after shim disconnected" id=dc2c923c1560f95e15aff8c9738850704ecad3b39cbfb9d2b8a2d804f922dd83 namespace=moby
	Sep 18 19:19:17 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:17.805375352Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:19:17 ingress-addon-legacy-356000 dockerd[1092]: time="2023-09-18T19:19:17.805502268Z" level=info msg="ignoring event" container=dc2c923c1560f95e15aff8c9738850704ecad3b39cbfb9d2b8a2d804f922dd83 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:19:30 ingress-addon-legacy-356000 dockerd[1092]: time="2023-09-18T19:19:30.288876937Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=bbaf4574d92620529c0c93abb93032bcfcd8c208f63505c6dfbc62a69cbfb9aa
	Sep 18 19:19:30 ingress-addon-legacy-356000 dockerd[1092]: time="2023-09-18T19:19:30.296377002Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=bbaf4574d92620529c0c93abb93032bcfcd8c208f63505c6dfbc62a69cbfb9aa
	Sep 18 19:19:30 ingress-addon-legacy-356000 dockerd[1092]: time="2023-09-18T19:19:30.391800351Z" level=info msg="ignoring event" container=bbaf4574d92620529c0c93abb93032bcfcd8c208f63505c6dfbc62a69cbfb9aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:19:30 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:30.392089350Z" level=info msg="shim disconnected" id=bbaf4574d92620529c0c93abb93032bcfcd8c208f63505c6dfbc62a69cbfb9aa namespace=moby
	Sep 18 19:19:30 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:30.392135058Z" level=warning msg="cleaning up after shim disconnected" id=bbaf4574d92620529c0c93abb93032bcfcd8c208f63505c6dfbc62a69cbfb9aa namespace=moby
	Sep 18 19:19:30 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:30.392142558Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:19:30 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:30.420265239Z" level=info msg="shim disconnected" id=250305d851aab672980dade677b45c2db518922fd9b66d175ca3d681f429ec0d namespace=moby
	Sep 18 19:19:30 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:30.420295073Z" level=warning msg="cleaning up after shim disconnected" id=250305d851aab672980dade677b45c2db518922fd9b66d175ca3d681f429ec0d namespace=moby
	Sep 18 19:19:30 ingress-addon-legacy-356000 dockerd[1092]: time="2023-09-18T19:19:30.420217448Z" level=info msg="ignoring event" container=250305d851aab672980dade677b45c2db518922fd9b66d175ca3d681f429ec0d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:19:30 ingress-addon-legacy-356000 dockerd[1098]: time="2023-09-18T19:19:30.420299323Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	f43e083a9b54f       a39a074194753                                                                                                      19 seconds ago       Exited              hello-world-app           2                   c6df2ce47dd03
	876ffc9b48b16       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                                      40 seconds ago       Running             nginx                     0                   cfe0b407ec017
	bbaf4574d9262       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   58 seconds ago       Exited              controller                0                   250305d851aab
	ef792b9852305       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   de5af0cb29882
	bfa043e29a836       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   5203644d681a3
	a5b4369b693ba       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   9d31f24448198
	3ecd34cb6e9b1       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   fb6a33bdb31a9
	ea08cc3d1e996       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   bdee3c5248e25
	65b4cc4706154       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   bf652ae013866
	b080b188d76b1       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   ef4f184f1df1d
	62eee85246d74       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   14d8a9116548f
	0c3817f6f8508       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   dff36f52c697b
	
	* 
	* ==> coredns [3ecd34cb6e9b] <==
	* [INFO] 172.17.0.1:53467 - 23786 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046083s
	[INFO] 172.17.0.1:53467 - 49407 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046416s
	[INFO] 172.17.0.1:53467 - 57840 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031125s
	[INFO] 172.17.0.1:53467 - 1492 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055333s
	[INFO] 172.17.0.1:36430 - 23322 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048874s
	[INFO] 172.17.0.1:36430 - 8807 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000012542s
	[INFO] 172.17.0.1:36430 - 45653 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010666s
	[INFO] 172.17.0.1:36430 - 47107 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010624s
	[INFO] 172.17.0.1:36430 - 30893 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009417s
	[INFO] 172.17.0.1:36430 - 8256 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009167s
	[INFO] 172.17.0.1:27726 - 12154 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000010167s
	[INFO] 172.17.0.1:36430 - 59044 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000014125s
	[INFO] 172.17.0.1:27726 - 16018 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015667s
	[INFO] 172.17.0.1:27726 - 14379 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011042s
	[INFO] 172.17.0.1:27726 - 16644 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009666s
	[INFO] 172.17.0.1:27726 - 37003 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008834s
	[INFO] 172.17.0.1:27726 - 45768 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012042s
	[INFO] 172.17.0.1:27726 - 42774 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000016417s
	[INFO] 172.17.0.1:51520 - 8714 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044166s
	[INFO] 172.17.0.1:51520 - 9322 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000020791s
	[INFO] 172.17.0.1:51520 - 43257 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000082999s
	[INFO] 172.17.0.1:51520 - 34606 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000016625s
	[INFO] 172.17.0.1:51520 - 5383 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000088248s
	[INFO] 172.17.0.1:51520 - 23857 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000018333s
	[INFO] 172.17.0.1:51520 - 23981 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000046249s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-356000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-356000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36
	                    minikube.k8s.io/name=ingress-addon-legacy-356000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_18T12_18_01_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Sep 2023 19:17:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-356000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Sep 2023 19:19:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Sep 2023 19:19:07 +0000   Mon, 18 Sep 2023 19:17:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Sep 2023 19:19:07 +0000   Mon, 18 Sep 2023 19:17:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Sep 2023 19:19:07 +0000   Mon, 18 Sep 2023 19:17:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Sep 2023 19:19:07 +0000   Mon, 18 Sep 2023 19:18:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-356000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d08ff2d43c6419f91fdf30a1e38e978
	  System UUID:                6d08ff2d43c6419f91fdf30a1e38e978
	  Boot ID:                    179819aa-3167-4595-99e9-0e46c2d1c89f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-lpns5                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 coredns-66bff467f8-q8rzc                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     78s
	  kube-system                 etcd-ingress-addon-legacy-356000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-apiserver-ingress-addon-legacy-356000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-356000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-rhtk4                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-ingress-addon-legacy-356000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 100s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  100s (x4 over 100s)  kubelet     Node ingress-addon-legacy-356000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x4 over 100s)  kubelet     Node ingress-addon-legacy-356000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x3 over 100s)  kubelet     Node ingress-addon-legacy-356000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  100s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 88s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s                  kubelet     Node ingress-addon-legacy-356000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s                  kubelet     Node ingress-addon-legacy-356000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s                  kubelet     Node ingress-addon-legacy-356000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                88s                  kubelet     Node ingress-addon-legacy-356000 status is now: NodeReady
	  Normal  Starting                 77s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep18 19:17] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.649618] EINJ: EINJ table not found.
	[  +0.529566] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043747] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000845] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.232376] systemd-fstab-generator[481]: Ignoring "noauto" for root device
	[  +0.060373] systemd-fstab-generator[493]: Ignoring "noauto" for root device
	[  +0.435544] systemd-fstab-generator[715]: Ignoring "noauto" for root device
	[  +0.155386] systemd-fstab-generator[750]: Ignoring "noauto" for root device
	[  +0.072070] systemd-fstab-generator[847]: Ignoring "noauto" for root device
	[  +0.070118] systemd-fstab-generator[860]: Ignoring "noauto" for root device
	[  +4.237777] systemd-fstab-generator[1065]: Ignoring "noauto" for root device
	[  +1.512879] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.930562] systemd-fstab-generator[1540]: Ignoring "noauto" for root device
	[  +8.557171] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.084661] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep18 19:18] systemd-fstab-generator[2644]: Ignoring "noauto" for root device
	[ +17.041157] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.576892] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.123278] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Sep18 19:19] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [b080b188d76b] <==
	* raft2023/09/18 19:17:56 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/09/18 19:17:56 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/18 19:17:56 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/09/18 19:17:56 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-18 19:17:56.431284 W | auth: simple token is not cryptographically signed
	2023-09-18 19:17:56.432434 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-18 19:17:56.433041 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-18 19:17:56.434293 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-18 19:17:56.434346 I | embed: listening for peers on 192.168.105.6:2380
	raft2023/09/18 19:17:56 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-18 19:17:56.434389 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	2023-09-18 19:17:56.434426 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/09/18 19:17:56 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/09/18 19:17:56 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/09/18 19:17:56 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/09/18 19:17:56 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/09/18 19:17:56 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-09-18 19:17:56.941760 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-18 19:17:56.953756 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-18 19:17:56.953810 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-18 19:17:56.957764 I | etcdserver: published {Name:ingress-addon-legacy-356000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-09-18 19:17:56.957780 I | embed: ready to serve client requests
	2023-09-18 19:17:56.961803 I | embed: ready to serve client requests
	2023-09-18 19:17:56.970305 I | embed: serving client requests on 192.168.105.6:2379
	2023-09-18 19:17:56.973459 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  19:19:35 up 2 min,  0 users,  load average: 0.44, 0.21, 0.08
	Linux ingress-addon-legacy-356000 5.10.57 #1 SMP PREEMPT Fri Sep 15 19:03:18 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [62eee85246d7] <==
	* I0918 19:17:58.879115       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E0918 19:17:58.906467       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0918 19:17:58.957759       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0918 19:17:58.958259       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0918 19:17:58.962266       1 cache.go:39] Caches are synced for autoregister controller
	I0918 19:17:58.962382       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0918 19:17:58.979699       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0918 19:17:59.859323       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0918 19:17:59.859393       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0918 19:17:59.879200       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0918 19:17:59.897279       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0918 19:17:59.897359       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0918 19:18:00.022144       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 19:18:00.035553       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0918 19:18:00.131082       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0918 19:18:00.131526       1 controller.go:609] quota admission added evaluator for: endpoints
	I0918 19:18:00.133262       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 19:18:01.160258       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0918 19:18:01.798036       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0918 19:18:01.989974       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0918 19:18:07.742694       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 19:18:17.729196       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0918 19:18:17.767821       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0918 19:18:25.461666       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0918 19:18:52.467298       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [0c3817f6f850] <==
	* I0918 19:18:17.784048       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6489a424-b798-4fd3-93d3-aa5d106b0201", APIVersion:"apps/v1", ResourceVersion:"338", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-q8rzc
	I0918 19:18:17.827758       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"e35e5b81-b183-4092-b48d-509651a3c08e", APIVersion:"apps/v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0918 19:18:17.840829       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6489a424-b798-4fd3-93d3-aa5d106b0201", APIVersion:"apps/v1", ResourceVersion:"357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-cqrvw
	I0918 19:18:17.847311       1 request.go:621] Throttling request took 1.050119167s, request: GET:https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	I0918 19:18:17.862602       1 shared_informer.go:230] Caches are synced for taint 
	I0918 19:18:17.862672       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0918 19:18:17.862694       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-356000. Assuming now as a timestamp.
	I0918 19:18:17.862712       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0918 19:18:17.862738       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-356000", UID:"f456b6af-357f-47e0-882d-bb8991f2e862", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-356000 event: Registered Node ingress-addon-legacy-356000 in Controller
	I0918 19:18:17.862756       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0918 19:18:17.903021       1 shared_informer.go:230] Caches are synced for resource quota 
	I0918 19:18:17.942197       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0918 19:18:17.942213       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0918 19:18:18.447239       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0918 19:18:18.447257       1 shared_informer.go:230] Caches are synced for resource quota 
	I0918 19:18:18.736893       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0918 19:18:18.736929       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0918 19:18:25.455987       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"54e5fcab-bb86-4366-944b-5189776f7590", APIVersion:"apps/v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0918 19:18:25.465242       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"9440c38e-c5cb-48c7-bfc3-5d22a09d9f4e", APIVersion:"apps/v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-8ztft
	I0918 19:18:25.469702       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"ac9befe1-1a0b-4995-8ec7-15729a3ae28b", APIVersion:"batch/v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-m5drt
	I0918 19:18:25.488627       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ef6cd74d-5c21-494e-add9-6c9a77ba2f2f", APIVersion:"batch/v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-twkkp
	I0918 19:18:29.101628       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"ac9befe1-1a0b-4995-8ec7-15729a3ae28b", APIVersion:"batch/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0918 19:18:29.122052       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ef6cd74d-5c21-494e-add9-6c9a77ba2f2f", APIVersion:"batch/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0918 19:19:01.733985       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"163be8c7-ee47-4f69-b65b-c1351a53ce41", APIVersion:"apps/v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0918 19:19:01.748677       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"e32ec705-2f28-4e78-a0ac-d55c89f2fb81", APIVersion:"apps/v1", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-lpns5
	
	* 
	* ==> kube-proxy [ea08cc3d1e99] <==
	* W0918 19:18:18.253708       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0918 19:18:18.257874       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0918 19:18:18.257894       1 server_others.go:186] Using iptables Proxier.
	I0918 19:18:18.258046       1 server.go:583] Version: v1.18.20
	I0918 19:18:18.259172       1 config.go:315] Starting service config controller
	I0918 19:18:18.259288       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0918 19:18:18.259517       1 config.go:133] Starting endpoints config controller
	I0918 19:18:18.259546       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0918 19:18:18.361005       1 shared_informer.go:230] Caches are synced for service config 
	I0918 19:18:18.361015       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [65b4cc470615] <==
	* I0918 19:17:58.913327       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 19:17:58.913368       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 19:17:58.913402       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0918 19:17:58.915970       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 19:17:58.916040       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 19:17:58.916108       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 19:17:58.916155       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:17:58.916213       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 19:17:58.916253       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 19:17:58.916302       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 19:17:58.916412       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 19:17:58.916467       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 19:17:58.916531       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:17:58.916580       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 19:17:58.917392       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 19:17:59.734171       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 19:17:59.762833       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 19:17:59.776048       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:17:59.803850       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 19:17:59.819299       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 19:17:59.881385       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 19:17:59.981596       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0918 19:18:02.513601       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0918 19:18:17.799228       1 factory.go:503] pod: kube-system/coredns-66bff467f8-cqrvw is already present in the active queue
	E0918 19:18:17.818398       1 factory.go:503] pod: kube-system/coredns-66bff467f8-q8rzc is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-18 19:17:33 UTC, ends at Mon 2023-09-18 19:19:35 UTC. --
	Sep 18 19:19:16 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:16.792777    2650 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2cae845ab213bddc60601ff928df0363ad7ad7c743c4b0690e79350ffcb78fdb
	Sep 18 19:19:16 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:16.795068    2650 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e6d86d6d940b6e6d884b62eccbf825e5858d0d2dc6d8b136176810004f66f7e6
	Sep 18 19:19:16 ingress-addon-legacy-356000 kubelet[2650]: E0918 19:19:16.795970    2650 pod_workers.go:191] Error syncing pod c638346f-3bbd-409c-9658-33356930a857 ("kube-ingress-dns-minikube_kube-system(c638346f-3bbd-409c-9658-33356930a857)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(c638346f-3bbd-409c-9658-33356930a857)"
	Sep 18 19:19:16 ingress-addon-legacy-356000 kubelet[2650]: W0918 19:19:16.911911    2650 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pode916a9d6-2aa4-41b2-a04c-079c194a0849/f43e083a9b54f0ae2b5bc6f13843046118b0fd5fbc2d7160949ca582e9caf920": none of the resources are being tracked.
	Sep 18 19:19:17 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:17.150097    2650 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-6hsk8" (UniqueName: "kubernetes.io/secret/c638346f-3bbd-409c-9658-33356930a857-minikube-ingress-dns-token-6hsk8") pod "c638346f-3bbd-409c-9658-33356930a857" (UID: "c638346f-3bbd-409c-9658-33356930a857")
	Sep 18 19:19:17 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:17.152429    2650 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c638346f-3bbd-409c-9658-33356930a857-minikube-ingress-dns-token-6hsk8" (OuterVolumeSpecName: "minikube-ingress-dns-token-6hsk8") pod "c638346f-3bbd-409c-9658-33356930a857" (UID: "c638346f-3bbd-409c-9658-33356930a857"). InnerVolumeSpecName "minikube-ingress-dns-token-6hsk8". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 18 19:19:17 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:17.252272    2650 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-6hsk8" (UniqueName: "kubernetes.io/secret/c638346f-3bbd-409c-9658-33356930a857-minikube-ingress-dns-token-6hsk8") on node "ingress-addon-legacy-356000" DevicePath ""
	Sep 18 19:19:17 ingress-addon-legacy-356000 kubelet[2650]: W0918 19:19:17.726231    2650 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-lpns5 through plugin: invalid network status for
	Sep 18 19:19:17 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:17.733017    2650 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2cae845ab213bddc60601ff928df0363ad7ad7c743c4b0690e79350ffcb78fdb
	Sep 18 19:19:17 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:17.733601    2650 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f43e083a9b54f0ae2b5bc6f13843046118b0fd5fbc2d7160949ca582e9caf920
	Sep 18 19:19:17 ingress-addon-legacy-356000 kubelet[2650]: E0918 19:19:17.735136    2650 pod_workers.go:191] Error syncing pod e916a9d6-2aa4-41b2-a04c-079c194a0849 ("hello-world-app-5f5d8b66bb-lpns5_default(e916a9d6-2aa4-41b2-a04c-079c194a0849)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-lpns5_default(e916a9d6-2aa4-41b2-a04c-079c194a0849)"
	Sep 18 19:19:18 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:18.753314    2650 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e6d86d6d940b6e6d884b62eccbf825e5858d0d2dc6d8b136176810004f66f7e6
	Sep 18 19:19:18 ingress-addon-legacy-356000 kubelet[2650]: W0918 19:19:18.760040    2650 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-lpns5 through plugin: invalid network status for
	Sep 18 19:19:28 ingress-addon-legacy-356000 kubelet[2650]: E0918 19:19:28.279034    2650 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-8ztft.1786146d086c7533", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-8ztft", UID:"7e42a330-4052-4222-9dd9-ebbd85458214", APIVersion:"v1", ResourceVersion:"453", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-356000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a46901080f533, ext:86950217612, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a46901080f533, ext:86950217612, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-8ztft.1786146d086c7533" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 18 19:19:28 ingress-addon-legacy-356000 kubelet[2650]: E0918 19:19:28.288272    2650 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-8ztft.1786146d086c7533", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-8ztft", UID:"7e42a330-4052-4222-9dd9-ebbd85458214", APIVersion:"v1", ResourceVersion:"453", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-356000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a46901080f533, ext:86950217612, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a469010ef109f, ext:86957433633, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-8ztft.1786146d086c7533" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 18 19:19:29 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:29.794494    2650 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f43e083a9b54f0ae2b5bc6f13843046118b0fd5fbc2d7160949ca582e9caf920
	Sep 18 19:19:29 ingress-addon-legacy-356000 kubelet[2650]: E0918 19:19:29.796120    2650 pod_workers.go:191] Error syncing pod e916a9d6-2aa4-41b2-a04c-079c194a0849 ("hello-world-app-5f5d8b66bb-lpns5_default(e916a9d6-2aa4-41b2-a04c-079c194a0849)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-lpns5_default(e916a9d6-2aa4-41b2-a04c-079c194a0849)"
	Sep 18 19:19:30 ingress-addon-legacy-356000 kubelet[2650]: W0918 19:19:30.953382    2650 pod_container_deletor.go:77] Container "250305d851aab672980dade677b45c2db518922fd9b66d175ca3d681f429ec0d" not found in pod's containers
	Sep 18 19:19:32 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:32.470327    2650 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7e42a330-4052-4222-9dd9-ebbd85458214-webhook-cert") pod "7e42a330-4052-4222-9dd9-ebbd85458214" (UID: "7e42a330-4052-4222-9dd9-ebbd85458214")
	Sep 18 19:19:32 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:32.471303    2650 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-q2bk9" (UniqueName: "kubernetes.io/secret/7e42a330-4052-4222-9dd9-ebbd85458214-ingress-nginx-token-q2bk9") pod "7e42a330-4052-4222-9dd9-ebbd85458214" (UID: "7e42a330-4052-4222-9dd9-ebbd85458214")
	Sep 18 19:19:32 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:32.479480    2650 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e42a330-4052-4222-9dd9-ebbd85458214-ingress-nginx-token-q2bk9" (OuterVolumeSpecName: "ingress-nginx-token-q2bk9") pod "7e42a330-4052-4222-9dd9-ebbd85458214" (UID: "7e42a330-4052-4222-9dd9-ebbd85458214"). InnerVolumeSpecName "ingress-nginx-token-q2bk9". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 18 19:19:32 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:32.480676    2650 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e42a330-4052-4222-9dd9-ebbd85458214-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7e42a330-4052-4222-9dd9-ebbd85458214" (UID: "7e42a330-4052-4222-9dd9-ebbd85458214"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 18 19:19:32 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:32.572266    2650 reconciler.go:319] Volume detached for volume "ingress-nginx-token-q2bk9" (UniqueName: "kubernetes.io/secret/7e42a330-4052-4222-9dd9-ebbd85458214-ingress-nginx-token-q2bk9") on node "ingress-addon-legacy-356000" DevicePath ""
	Sep 18 19:19:32 ingress-addon-legacy-356000 kubelet[2650]: I0918 19:19:32.572371    2650 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7e42a330-4052-4222-9dd9-ebbd85458214-webhook-cert") on node "ingress-addon-legacy-356000" DevicePath ""
	Sep 18 19:19:33 ingress-addon-legacy-356000 kubelet[2650]: W0918 19:19:33.807329    2650 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/7e42a330-4052-4222-9dd9-ebbd85458214/volumes" does not exist
	
	* 
	* ==> storage-provisioner [a5b4369b693b] <==
	* I0918 19:18:20.081203       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 19:18:20.086983       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 19:18:20.087004       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 19:18:20.089433       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 19:18:20.089782       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d4e14529-cc8b-43e1-9a2a-b4e806c7439b", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-356000_579f4f86-e318-4180-8876-0f155701ded1 became leader
	I0918 19:18:20.089820       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-356000_579f4f86-e318-4180-8876-0f155701ded1!
	I0918 19:18:20.190355       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-356000_579f4f86-e318-4180-8876-0f155701ded1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-356000 -n ingress-addon-legacy-356000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-356000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (56.85s)

TestMinikubeProfile (76.57s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-189000 --driver=qemu2 
E0918 12:20:38.960718    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:20:38.967029    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:20:38.979066    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:20:39.001124    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:20:39.042924    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:20:39.124965    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:20:39.287007    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:20:39.609062    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:20:40.251114    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:20:41.533207    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:20:44.095581    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:20:49.218006    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:20:59.460369    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:21:19.942600    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-189000 --driver=qemu2 : exit status 90 (1m16.096029958s)

-- stdout --
	* [first-189000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node first-189000 in cluster first-189000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-189000 --driver=qemu2 ": exit status 90
panic.go:523: *** TestMinikubeProfile FAILED at 2023-09-18 12:21:48.224704 -0700 PDT m=+1804.650388542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-191000 -n second-191000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-191000 -n second-191000: exit status 85 (55.203333ms)

-- stdout --
	* Profile "second-191000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-191000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-191000" host is not running, skipping log retrieval (state="* Profile \"second-191000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-191000\"")
helpers_test.go:175: Cleaning up "second-191000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-191000
panic.go:523: *** TestMinikubeProfile FAILED at 2023-09-18 12:21:48.52565 -0700 PDT m=+1804.951336959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-189000 -n first-189000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-189000 -n first-189000: exit status 6 (75.304125ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0918 12:21:48.596686    3098 status.go:415] kubeconfig endpoint: extract IP: "first-189000" does not appear in /Users/jenkins/minikube-integration/17263-1251/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "first-189000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "first-189000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-189000
--- FAIL: TestMinikubeProfile (76.57s)

TestMountStart/serial/VerifyMountPostDelete (101.03s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-2-738000 ssh -- ls /minikube-host
E0918 12:23:22.826783    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:23:38.802513    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:23:38.811691    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:23:38.823837    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:23:38.846000    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:23:38.888160    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:23:38.970263    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:23:39.132441    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:23:39.454727    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:23:40.097161    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p mount-start-2-738000 ssh -- ls /minikube-host: exit status 1 (1m15.037977708s)

** stderr ** 
	ssh: dial tcp 192.168.105.10:22: connect: operation timed out

** /stderr **
mount_start_test.go:116: mount failed: "out/minikube-darwin-arm64 -p mount-start-2-738000 ssh -- ls /minikube-host" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-2-738000 -n mount-start-2-738000
E0918 12:23:41.379789    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:23:43.942199    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:23:49.064598    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:23:59.307014    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:24:00.286151    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-2-738000 -n mount-start-2-738000: exit status 3 (25.993661167s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0918 12:24:06.726837    3161 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.10:22: connect: operation timed out
	E0918 12:24:06.726875    3161 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.10:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "mount-start-2-738000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/VerifyMountPostDelete (101.03s)

TestMultiNode/serial/StopNode (378.17s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-arm64 -p multinode-145000 node stop m03: (3.057899083s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 status
E0918 12:28:38.798124    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:29:00.282758    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-145000 status: exit status 7 (2m30.038341s)

-- stdout --
	multinode-145000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-145000-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-145000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	E0918 12:27:48.245674    3382 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0918 12:27:48.245695    3382 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0918 12:29:03.247764    3382 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	E0918 12:29:03.247779    3382 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out

** /stderr **
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 status --alsologtostderr
E0918 12:29:06.512096    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:30:23.325025    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:30:38.921426    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-145000 status --alsologtostderr: exit status 7 (2m30.039627541s)

-- stdout --
	multinode-145000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-145000-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-145000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0918 12:29:03.276979    3410 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:29:03.277156    3410 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:29:03.277159    3410 out.go:309] Setting ErrFile to fd 2...
	I0918 12:29:03.277162    3410 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:29:03.277319    3410 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:29:03.277452    3410 out.go:303] Setting JSON to false
	I0918 12:29:03.277467    3410 mustload.go:65] Loading cluster: multinode-145000
	I0918 12:29:03.277507    3410 notify.go:220] Checking for updates...
	I0918 12:29:03.277738    3410 config.go:182] Loaded profile config "multinode-145000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:29:03.277747    3410 status.go:255] checking status of multinode-145000 ...
	I0918 12:29:03.278476    3410 status.go:330] multinode-145000 host status = "Running" (err=<nil>)
	I0918 12:29:03.278485    3410 host.go:66] Checking if "multinode-145000" exists ...
	I0918 12:29:03.278595    3410 host.go:66] Checking if "multinode-145000" exists ...
	I0918 12:29:03.278707    3410 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 12:29:03.278720    3410 sshutil.go:53] new ssh client: &{IP:192.168.105.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/id_rsa Username:docker}
	W0918 12:30:18.258346    3410 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.11:22: connect: operation timed out
	W0918 12:30:18.260895    3410 start.go:275] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0918 12:30:18.260904    3410 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	I0918 12:30:18.260907    3410 status.go:257] multinode-145000 status: &{Name:multinode-145000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0918 12:30:18.260915    3410 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	I0918 12:30:18.260922    3410 status.go:255] checking status of multinode-145000-m02 ...
	I0918 12:30:18.261600    3410 status.go:330] multinode-145000-m02 host status = "Running" (err=<nil>)
	I0918 12:30:18.261605    3410 host.go:66] Checking if "multinode-145000-m02" exists ...
	I0918 12:30:18.261704    3410 host.go:66] Checking if "multinode-145000-m02" exists ...
	I0918 12:30:18.261822    3410 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 12:30:18.261829    3410 sshutil.go:53] new ssh client: &{IP:192.168.105.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m02/id_rsa Username:docker}
	W0918 12:31:33.248518    3410 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.12:22: connect: operation timed out
	W0918 12:31:33.248559    3410 start.go:275] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	E0918 12:31:33.248567    3410 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	I0918 12:31:33.248572    3410 status.go:257] multinode-145000-m02 status: &{Name:multinode-145000-m02 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0918 12:31:33.248579    3410 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	I0918 12:31:33.248583    3410 status.go:255] checking status of multinode-145000-m03 ...
	I0918 12:31:33.248745    3410 status.go:330] multinode-145000-m03 host status = "Stopped" (err=<nil>)
	I0918 12:31:33.248749    3410 status.go:343] host is not running, skipping remaining checks
	I0918 12:31:33.248751    3410 status.go:257] multinode-145000-m03 status: &{Name:multinode-145000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-145000 status --alsologtostderr": multinode-145000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

multinode-145000-m02
type: Worker
host: Error
kubelet: Nonexistent

multinode-145000-m03
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000: exit status 3 (1m15.035229958s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0918 12:32:48.282618    3479 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0918 12:32:48.282627    3479 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-145000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopNode (378.17s)

TestMultiNode/serial/StartAfterStop (230.15s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-145000 node start m03 --alsologtostderr: exit status 80 (5.079327458s)

-- stdout --
	* Starting worker node multinode-145000-m03 in cluster multinode-145000
	* Restarting existing qemu2 VM for "multinode-145000-m03" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-145000-m03" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:32:48.312047    3502 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:32:48.312276    3502 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:32:48.312282    3502 out.go:309] Setting ErrFile to fd 2...
	I0918 12:32:48.312284    3502 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:32:48.312432    3502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:32:48.312693    3502 mustload.go:65] Loading cluster: multinode-145000
	I0918 12:32:48.312908    3502 config.go:182] Loaded profile config "multinode-145000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	W0918 12:32:48.313120    3502 host.go:58] "multinode-145000-m03" host status: Stopped
	I0918 12:32:48.317458    3502 out.go:177] * Starting worker node multinode-145000-m03 in cluster multinode-145000
	I0918 12:32:48.320461    3502 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:32:48.320478    3502 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:32:48.320486    3502 cache.go:57] Caching tarball of preloaded images
	I0918 12:32:48.320549    3502 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:32:48.320554    3502 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:32:48.320604    3502 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/multinode-145000/config.json ...
	I0918 12:32:48.320881    3502 start.go:365] acquiring machines lock for multinode-145000-m03: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:32:48.320915    3502 start.go:369] acquired machines lock for "multinode-145000-m03" in 23.125µs
	I0918 12:32:48.320926    3502 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:32:48.320929    3502 fix.go:54] fixHost starting: m03
	I0918 12:32:48.321025    3502 fix.go:102] recreateIfNeeded on multinode-145000-m03: state=Stopped err=<nil>
	W0918 12:32:48.321030    3502 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:32:48.325421    3502 out.go:177] * Restarting existing qemu2 VM for "multinode-145000-m03" ...
	I0918 12:32:48.329542    3502 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:a1:16:27:ef:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/disk.qcow2
	I0918 12:32:48.331644    3502 main.go:141] libmachine: STDOUT: 
	I0918 12:32:48.331660    3502 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:32:48.331683    3502 fix.go:56] fixHost completed within 10.7525ms
	I0918 12:32:48.331687    3502 start.go:83] releasing machines lock for "multinode-145000-m03", held for 10.768542ms
	W0918 12:32:48.331694    3502 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:32:48.331718    3502 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:32:48.331721    3502 start.go:703] Will try again in 5 seconds ...
	I0918 12:32:53.333689    3502 start.go:365] acquiring machines lock for multinode-145000-m03: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:32:53.333840    3502 start.go:369] acquired machines lock for "multinode-145000-m03" in 129.208µs
	I0918 12:32:53.333893    3502 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:32:53.333899    3502 fix.go:54] fixHost starting: m03
	I0918 12:32:53.334084    3502 fix.go:102] recreateIfNeeded on multinode-145000-m03: state=Stopped err=<nil>
	W0918 12:32:53.334089    3502 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:32:53.338115    3502 out.go:177] * Restarting existing qemu2 VM for "multinode-145000-m03" ...
	I0918 12:32:53.342260    3502 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:a1:16:27:ef:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/disk.qcow2
	I0918 12:32:53.344745    3502 main.go:141] libmachine: STDOUT: 
	I0918 12:32:53.344766    3502 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:32:53.344788    3502 fix.go:56] fixHost completed within 10.889292ms
	I0918 12:32:53.344793    3502 start.go:83] releasing machines lock for "multinode-145000-m03", held for 10.947875ms
	W0918 12:32:53.344831    3502 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-145000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-145000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:32:53.349220    3502 out.go:177] 
	W0918 12:32:53.353254    3502 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:32:53.353259    3502 out.go:239] * 
	* 
	W0918 12:32:53.355199    3502 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:32:53.359264    3502 out.go:177] 

** /stderr **
multinode_test.go:256: I0918 12:32:48.312047    3502 out.go:296] Setting OutFile to fd 1 ...
I0918 12:32:48.312276    3502 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 12:32:48.312282    3502 out.go:309] Setting ErrFile to fd 2...
I0918 12:32:48.312284    3502 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 12:32:48.312432    3502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
I0918 12:32:48.312693    3502 mustload.go:65] Loading cluster: multinode-145000
I0918 12:32:48.312908    3502 config.go:182] Loaded profile config "multinode-145000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
W0918 12:32:48.313120    3502 host.go:58] "multinode-145000-m03" host status: Stopped
I0918 12:32:48.317458    3502 out.go:177] * Starting worker node multinode-145000-m03 in cluster multinode-145000
I0918 12:32:48.320461    3502 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
I0918 12:32:48.320478    3502 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
I0918 12:32:48.320486    3502 cache.go:57] Caching tarball of preloaded images
I0918 12:32:48.320549    3502 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0918 12:32:48.320554    3502 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
I0918 12:32:48.320604    3502 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/multinode-145000/config.json ...
I0918 12:32:48.320881    3502 start.go:365] acquiring machines lock for multinode-145000-m03: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0918 12:32:48.320915    3502 start.go:369] acquired machines lock for "multinode-145000-m03" in 23.125µs
I0918 12:32:48.320926    3502 start.go:96] Skipping create...Using existing machine configuration
I0918 12:32:48.320929    3502 fix.go:54] fixHost starting: m03
I0918 12:32:48.321025    3502 fix.go:102] recreateIfNeeded on multinode-145000-m03: state=Stopped err=<nil>
W0918 12:32:48.321030    3502 fix.go:128] unexpected machine state, will restart: <nil>
I0918 12:32:48.325421    3502 out.go:177] * Restarting existing qemu2 VM for "multinode-145000-m03" ...
I0918 12:32:48.329542    3502 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:a1:16:27:ef:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/disk.qcow2
I0918 12:32:48.331644    3502 main.go:141] libmachine: STDOUT: 
I0918 12:32:48.331660    3502 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0918 12:32:48.331683    3502 fix.go:56] fixHost completed within 10.7525ms
I0918 12:32:48.331687    3502 start.go:83] releasing machines lock for "multinode-145000-m03", held for 10.768542ms
W0918 12:32:48.331694    3502 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0918 12:32:48.331718    3502 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0918 12:32:48.331721    3502 start.go:703] Will try again in 5 seconds ...
I0918 12:32:53.333689    3502 start.go:365] acquiring machines lock for multinode-145000-m03: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0918 12:32:53.333840    3502 start.go:369] acquired machines lock for "multinode-145000-m03" in 129.208µs
I0918 12:32:53.333893    3502 start.go:96] Skipping create...Using existing machine configuration
I0918 12:32:53.333899    3502 fix.go:54] fixHost starting: m03
I0918 12:32:53.334084    3502 fix.go:102] recreateIfNeeded on multinode-145000-m03: state=Stopped err=<nil>
W0918 12:32:53.334089    3502 fix.go:128] unexpected machine state, will restart: <nil>
I0918 12:32:53.338115    3502 out.go:177] * Restarting existing qemu2 VM for "multinode-145000-m03" ...
I0918 12:32:53.342260    3502 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:a1:16:27:ef:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000-m03/disk.qcow2
I0918 12:32:53.344745    3502 main.go:141] libmachine: STDOUT: 
I0918 12:32:53.344766    3502 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0918 12:32:53.344788    3502 fix.go:56] fixHost completed within 10.889292ms
I0918 12:32:53.344793    3502 start.go:83] releasing machines lock for "multinode-145000-m03", held for 10.947875ms
W0918 12:32:53.344831    3502 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-145000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p multinode-145000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0918 12:32:53.349220    3502 out.go:177] 
W0918 12:32:53.353254    3502 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0918 12:32:53.353259    3502 out.go:239] * 
* 
W0918 12:32:53.355199    3502 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0918 12:32:53.359264    3502 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-145000 node start m03 --alsologtostderr": exit status 80
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 status
E0918 12:33:38.756450    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:34:00.240934    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-145000 status: exit status 7 (2m30.037219542s)

-- stdout --
	multinode-145000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-145000-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-145000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	E0918 12:34:08.394246    3506 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0918 12:34:08.394268    3506 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0918 12:35:23.396245    3506 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	E0918 12:35:23.396263    3506 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out

** /stderr **
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-145000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000
E0918 12:35:38.911987    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000: exit status 3 (1m15.035841125s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0918 12:36:38.431013    3548 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0918 12:36:38.431034    3548 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-145000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StartAfterStop (230.15s)
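[Editor's note] Every start attempt in this test dies on `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which points at the socket_vmnet daemon on the CI host not accepting connections (e.g. stopped, with a stale socket path left behind) rather than at QEMU or minikube itself. A minimal, hypothetical sketch of how that errno arises -- a socket file that exists but has no active listener (path and setup are illustrative, not taken from the CI host):

```python
import os
import socket
import tempfile

# A socket file that exists but has no listener behind it, analogous to a
# /var/run/socket_vmnet path whose daemon is no longer running.
path = os.path.join(tempfile.mkdtemp(), "vmnet.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)  # bound, but listen() is never called

refused = False
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    cli.connect(path)
except ConnectionRefusedError:
    refused = True
    print("Connection refused")
finally:
    cli.close()
    srv.close()
    os.unlink(path)
```

On the CI host the practical check would be whether the socket_vmnet service is actually running and listening on `/var/run/socket_vmnet` before the test suite starts.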

TestMultiNode/serial/RestartKeepsNodes (41.52s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-145000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-145000
E0918 12:37:01.983102    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-145000: (36.164589s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-145000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-145000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.221649792s)

-- stdout --
	* [multinode-145000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-145000 in cluster multinode-145000
	* Restarting existing qemu2 VM for "multinode-145000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-145000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:37:14.685436    3579 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:37:14.685689    3579 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:14.685694    3579 out.go:309] Setting ErrFile to fd 2...
	I0918 12:37:14.685698    3579 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:14.685923    3579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:37:14.687247    3579 out.go:303] Setting JSON to false
	I0918 12:37:14.707138    3579 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4008,"bootTime":1695061826,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:37:14.707215    3579 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:37:14.711478    3579 out.go:177] * [multinode-145000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:37:14.719285    3579 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:37:14.723173    3579 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:37:14.719343    3579 notify.go:220] Checking for updates...
	I0918 12:37:14.729316    3579 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:37:14.732332    3579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:37:14.735339    3579 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:37:14.738308    3579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:37:14.741681    3579 config.go:182] Loaded profile config "multinode-145000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:37:14.741730    3579 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:37:14.746265    3579 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 12:37:14.753317    3579 start.go:298] selected driver: qemu2
	I0918 12:37:14.753323    3579 start.go:902] validating driver "qemu2" against &{Name:multinode-145000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-145000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.11 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.12 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.105.13 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:37:14.753397    3579 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:37:14.755860    3579 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:37:14.755887    3579 cni.go:84] Creating CNI manager for ""
	I0918 12:37:14.755892    3579 cni.go:136] 3 nodes found, recommending kindnet
	I0918 12:37:14.755898    3579 start_flags.go:321] config:
	{Name:multinode-145000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-145000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.11 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.12 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.105.13 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:37:14.760860    3579 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:14.769314    3579 out.go:177] * Starting control plane node multinode-145000 in cluster multinode-145000
	I0918 12:37:14.773244    3579 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:37:14.773261    3579 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:37:14.773270    3579 cache.go:57] Caching tarball of preloaded images
	I0918 12:37:14.773322    3579 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:37:14.773327    3579 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:37:14.773391    3579 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/multinode-145000/config.json ...
	I0918 12:37:14.773759    3579 start.go:365] acquiring machines lock for multinode-145000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:37:14.773791    3579 start.go:369] acquired machines lock for "multinode-145000" in 25.833µs
	I0918 12:37:14.773802    3579 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:37:14.773806    3579 fix.go:54] fixHost starting: 
	I0918 12:37:14.773923    3579 fix.go:102] recreateIfNeeded on multinode-145000: state=Stopped err=<nil>
	W0918 12:37:14.773931    3579 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:37:14.778382    3579 out.go:177] * Restarting existing qemu2 VM for "multinode-145000" ...
	I0918 12:37:14.786321    3579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:6d:a3:c1:61:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/disk.qcow2
	I0918 12:37:14.788376    3579 main.go:141] libmachine: STDOUT: 
	I0918 12:37:14.788396    3579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:37:14.788423    3579 fix.go:56] fixHost completed within 14.616167ms
	I0918 12:37:14.788428    3579 start.go:83] releasing machines lock for "multinode-145000", held for 14.633375ms
	W0918 12:37:14.788438    3579 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:37:14.788473    3579 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:37:14.788477    3579 start.go:703] Will try again in 5 seconds ...
	I0918 12:37:19.790054    3579 start.go:365] acquiring machines lock for multinode-145000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:37:19.790630    3579 start.go:369] acquired machines lock for "multinode-145000" in 399.208µs
	I0918 12:37:19.790897    3579 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:37:19.790919    3579 fix.go:54] fixHost starting: 
	I0918 12:37:19.791706    3579 fix.go:102] recreateIfNeeded on multinode-145000: state=Stopped err=<nil>
	W0918 12:37:19.791736    3579 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:37:19.801312    3579 out.go:177] * Restarting existing qemu2 VM for "multinode-145000" ...
	I0918 12:37:19.805554    3579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:6d:a3:c1:61:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/disk.qcow2
	I0918 12:37:19.814196    3579 main.go:141] libmachine: STDOUT: 
	I0918 12:37:19.814255    3579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:37:19.814339    3579 fix.go:56] fixHost completed within 23.420042ms
	I0918 12:37:19.814362    3579 start.go:83] releasing machines lock for "multinode-145000", held for 23.683083ms
	W0918 12:37:19.814623    3579 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-145000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-145000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:37:19.822313    3579 out.go:177] 
	W0918 12:37:19.826393    3579 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:37:19.826414    3579 out.go:239] * 
	* 
	W0918 12:37:19.829216    3579 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:37:19.836203    3579 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-145000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-145000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000: exit status 7 (31.8015ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-145000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (41.52s)

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-145000 node delete m03: exit status 89 (38.471667ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-145000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-145000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-145000 status --alsologtostderr: exit status 7 (27.78825ms)

-- stdout --
	multinode-145000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-145000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-145000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0918 12:37:20.012058    3596 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:37:20.012219    3596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:20.012222    3596 out.go:309] Setting ErrFile to fd 2...
	I0918 12:37:20.012224    3596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:20.012363    3596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:37:20.012500    3596 out.go:303] Setting JSON to false
	I0918 12:37:20.012512    3596 mustload.go:65] Loading cluster: multinode-145000
	I0918 12:37:20.012565    3596 notify.go:220] Checking for updates...
	I0918 12:37:20.012732    3596 config.go:182] Loaded profile config "multinode-145000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:37:20.012736    3596 status.go:255] checking status of multinode-145000 ...
	I0918 12:37:20.012942    3596 status.go:330] multinode-145000 host status = "Stopped" (err=<nil>)
	I0918 12:37:20.012945    3596 status.go:343] host is not running, skipping remaining checks
	I0918 12:37:20.012947    3596 status.go:257] multinode-145000 status: &{Name:multinode-145000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 12:37:20.012959    3596 status.go:255] checking status of multinode-145000-m02 ...
	I0918 12:37:20.013062    3596 status.go:330] multinode-145000-m02 host status = "Stopped" (err=<nil>)
	I0918 12:37:20.013065    3596 status.go:343] host is not running, skipping remaining checks
	I0918 12:37:20.013067    3596 status.go:257] multinode-145000-m02 status: &{Name:multinode-145000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0918 12:37:20.013071    3596 status.go:255] checking status of multinode-145000-m03 ...
	I0918 12:37:20.013160    3596 status.go:330] multinode-145000-m03 host status = "Stopped" (err=<nil>)
	I0918 12:37:20.013164    3596 status.go:343] host is not running, skipping remaining checks
	I0918 12:37:20.013165    3596 status.go:257] multinode-145000-m03 status: &{Name:multinode-145000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-145000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000: exit status 7 (28.736ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-145000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (0.17s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-145000 status: exit status 7 (29.739333ms)

-- stdout --
	multinode-145000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-145000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-145000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-145000 status --alsologtostderr: exit status 7 (27.817042ms)

-- stdout --
	multinode-145000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-145000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-145000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0918 12:37:20.181907    3604 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:37:20.182070    3604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:20.182074    3604 out.go:309] Setting ErrFile to fd 2...
	I0918 12:37:20.182076    3604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:20.182215    3604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:37:20.182355    3604 out.go:303] Setting JSON to false
	I0918 12:37:20.182366    3604 mustload.go:65] Loading cluster: multinode-145000
	I0918 12:37:20.182438    3604 notify.go:220] Checking for updates...
	I0918 12:37:20.182595    3604 config.go:182] Loaded profile config "multinode-145000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:37:20.182601    3604 status.go:255] checking status of multinode-145000 ...
	I0918 12:37:20.182805    3604 status.go:330] multinode-145000 host status = "Stopped" (err=<nil>)
	I0918 12:37:20.182808    3604 status.go:343] host is not running, skipping remaining checks
	I0918 12:37:20.182810    3604 status.go:257] multinode-145000 status: &{Name:multinode-145000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 12:37:20.182819    3604 status.go:255] checking status of multinode-145000-m02 ...
	I0918 12:37:20.182919    3604 status.go:330] multinode-145000-m02 host status = "Stopped" (err=<nil>)
	I0918 12:37:20.182922    3604 status.go:343] host is not running, skipping remaining checks
	I0918 12:37:20.182924    3604 status.go:257] multinode-145000-m02 status: &{Name:multinode-145000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0918 12:37:20.182927    3604 status.go:255] checking status of multinode-145000-m03 ...
	I0918 12:37:20.183017    3604 status.go:330] multinode-145000-m03 host status = "Stopped" (err=<nil>)
	I0918 12:37:20.183019    3604 status.go:343] host is not running, skipping remaining checks
	I0918 12:37:20.183021    3604 status.go:257] multinode-145000-m03 status: &{Name:multinode-145000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-145000 status --alsologtostderr": multinode-145000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-145000-m02
type: Worker
host: Stopped
kubelet: Stopped

multinode-145000-m03
type: Worker
host: Stopped
kubelet: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-145000 status --alsologtostderr": multinode-145000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-145000-m02
type: Worker
host: Stopped
kubelet: Stopped

multinode-145000-m03
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000: exit status 7 (27.303834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-145000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.17s)

TestMultiNode/serial/RestartMultiNode (5.24s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-145000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-145000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.170994959s)

-- stdout --
	* [multinode-145000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-145000 in cluster multinode-145000
	* Restarting existing qemu2 VM for "multinode-145000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-145000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:37:20.236919    3608 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:37:20.237048    3608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:20.237051    3608 out.go:309] Setting ErrFile to fd 2...
	I0918 12:37:20.237053    3608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:20.237184    3608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:37:20.238180    3608 out.go:303] Setting JSON to false
	I0918 12:37:20.253306    3608 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4014,"bootTime":1695061826,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:37:20.253389    3608 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:37:20.257370    3608 out.go:177] * [multinode-145000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:37:20.264360    3608 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:37:20.267320    3608 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:37:20.264434    3608 notify.go:220] Checking for updates...
	I0918 12:37:20.270331    3608 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:37:20.273353    3608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:37:20.276303    3608 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:37:20.279313    3608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:37:20.282710    3608 config.go:182] Loaded profile config "multinode-145000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:37:20.282989    3608 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:37:20.286286    3608 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 12:37:20.293345    3608 start.go:298] selected driver: qemu2
	I0918 12:37:20.293350    3608 start.go:902] validating driver "qemu2" against &{Name:multinode-145000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:multinode-145000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.11 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.12 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.105.13 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:fal
se inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:37:20.293433    3608 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:37:20.295287    3608 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:37:20.295310    3608 cni.go:84] Creating CNI manager for ""
	I0918 12:37:20.295314    3608 cni.go:136] 3 nodes found, recommending kindnet
	I0918 12:37:20.295320    3608 start_flags.go:321] config:
	{Name:multinode-145000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-145000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.11 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.12 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.105.13 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:f
alse istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clien
t SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:37:20.299291    3608 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:20.307318    3608 out.go:177] * Starting control plane node multinode-145000 in cluster multinode-145000
	I0918 12:37:20.311349    3608 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:37:20.311370    3608 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:37:20.311378    3608 cache.go:57] Caching tarball of preloaded images
	I0918 12:37:20.311463    3608 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:37:20.311468    3608 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:37:20.311541    3608 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/multinode-145000/config.json ...
	I0918 12:37:20.311894    3608 start.go:365] acquiring machines lock for multinode-145000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:37:20.311920    3608 start.go:369] acquired machines lock for "multinode-145000" in 19.291µs
	I0918 12:37:20.311930    3608 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:37:20.311933    3608 fix.go:54] fixHost starting: 
	I0918 12:37:20.312054    3608 fix.go:102] recreateIfNeeded on multinode-145000: state=Stopped err=<nil>
	W0918 12:37:20.312061    3608 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:37:20.316314    3608 out.go:177] * Restarting existing qemu2 VM for "multinode-145000" ...
	I0918 12:37:20.324369    3608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:6d:a3:c1:61:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/disk.qcow2
	I0918 12:37:20.326263    3608 main.go:141] libmachine: STDOUT: 
	I0918 12:37:20.326276    3608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:37:20.326301    3608 fix.go:56] fixHost completed within 14.3665ms
	I0918 12:37:20.326305    3608 start.go:83] releasing machines lock for "multinode-145000", held for 14.382ms
	W0918 12:37:20.326312    3608 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:37:20.326345    3608 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:37:20.326350    3608 start.go:703] Will try again in 5 seconds ...
	I0918 12:37:25.328462    3608 start.go:365] acquiring machines lock for multinode-145000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:37:25.328873    3608 start.go:369] acquired machines lock for "multinode-145000" in 330.291µs
	I0918 12:37:25.329013    3608 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:37:25.329034    3608 fix.go:54] fixHost starting: 
	I0918 12:37:25.329753    3608 fix.go:102] recreateIfNeeded on multinode-145000: state=Stopped err=<nil>
	W0918 12:37:25.329780    3608 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:37:25.335200    3608 out.go:177] * Restarting existing qemu2 VM for "multinode-145000" ...
	I0918 12:37:25.342298    3608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:6d:a3:c1:61:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/multinode-145000/disk.qcow2
	I0918 12:37:25.350703    3608 main.go:141] libmachine: STDOUT: 
	I0918 12:37:25.350827    3608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:37:25.350882    3608 fix.go:56] fixHost completed within 21.848834ms
	I0918 12:37:25.350903    3608 start.go:83] releasing machines lock for "multinode-145000", held for 22.006375ms
	W0918 12:37:25.351050    3608 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-145000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-145000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:37:25.358224    3608 out.go:177] 
	W0918 12:37:25.362228    3608 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:37:25.362271    3608 out.go:239] * 
	* 
	W0918 12:37:25.364680    3608 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:37:25.371160    3608 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-145000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000: exit status 7 (67.921584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-145000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.24s)
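Every failed start in this report dies the same way: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which points at the socket_vmnet daemon not listening on the host rather than at minikube or QEMU themselves. The failure class can be reproduced in a few lines; this is a sketch assuming only standard unix-domain socket semantics, not a transcript of what `socket_vmnet_client` actually does:

```python
import os
import socket
import tempfile

# A unix socket file that exists on disk but has no listener behind it
# fails at connect() with ECONNREFUSED -- the same "Connection refused"
# seen throughout the logs above.
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)            # creates the socket file on disk...
                             # ...but listen() is never called, so no daemon
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
result = None
try:
    client.connect(path)
except OSError as exc:
    result = exc.strerror
print(result)                # Connection refused
```

The same symptom appears whether the socket file is stale (daemon crashed) or the path is simply not being served, which is consistent with every qemu2 test in this run failing identically.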

TestMultiNode/serial/ValidateNameConflict (10.72s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-145000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-145000-m03 --driver=qemu2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-145000-m03 --driver=qemu2 : exit status 14 (94.194875ms)

-- stdout --
	* [multinode-145000-m03] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-145000-m03' is duplicated with machine name 'multinode-145000-m03' in profile 'multinode-145000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-145000-m04 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-145000-m04 --driver=qemu2 : exit status 80 (10.377373917s)

-- stdout --
	* [multinode-145000-m04] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-145000-m04 in cluster multinode-145000-m04
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-145000-m04" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-145000-m04" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-145000-m04 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-145000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-145000: exit status 89 (78.477625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-145000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-145000-m04
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-145000 -n multinode-145000: exit status 7 (28.284459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-145000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (10.72s)
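The exit-14 half of this test turns on minikube's multinode machine naming: worker machines in a profile are named `<profile>-m02`, `<profile>-m03`, and so on, so a new profile named `multinode-145000-m03` collides with an existing machine name while `-m04` does not (it only fails later on the socket_vmnet error). A toy sketch of that uniqueness rule, using a hypothetical helper rather than minikube's actual code:

```python
# conflicts() models the MK_USAGE check: a requested profile name must not
# match any existing profile name or any machine name derived from one.
# The "<profile>-mNN" scheme is inferred from the log output above.
def conflicts(new_profile, profiles):
    """profiles maps an existing profile name to its node count."""
    for name, nodes in profiles.items():
        machines = [name] + [f"{name}-m{i:02d}" for i in range(2, nodes + 1)]
        if new_profile in machines:
            return True
    return False

existing = {"multinode-145000": 3}   # control plane + m02 + m03
print(conflicts("multinode-145000-m03", existing))  # True  -> exit status 14
print(conflicts("multinode-145000-m04", existing))  # False -> start proceeds
```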

TestPreload (10.06s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-778000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-778000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.890998292s)

-- stdout --
	* [test-preload-778000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-778000 in cluster test-preload-778000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-778000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:37:36.395315    3652 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:37:36.395449    3652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:36.395452    3652 out.go:309] Setting ErrFile to fd 2...
	I0918 12:37:36.395454    3652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:36.395596    3652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:37:36.396641    3652 out.go:303] Setting JSON to false
	I0918 12:37:36.411693    3652 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4030,"bootTime":1695061826,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:37:36.411767    3652 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:37:36.416976    3652 out.go:177] * [test-preload-778000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:37:36.424891    3652 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:37:36.424960    3652 notify.go:220] Checking for updates...
	I0918 12:37:36.428806    3652 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:37:36.431836    3652 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:37:36.434833    3652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:37:36.436110    3652 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:37:36.438850    3652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:37:36.442059    3652 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:37:36.445798    3652 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:37:36.452836    3652 start.go:298] selected driver: qemu2
	I0918 12:37:36.452843    3652 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:37:36.452849    3652 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:37:36.454860    3652 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:37:36.458811    3652 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:37:36.461889    3652 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:37:36.461910    3652 cni.go:84] Creating CNI manager for ""
	I0918 12:37:36.461918    3652 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:37:36.461925    3652 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:37:36.461929    3652 start_flags.go:321] config:
	{Name:test-preload-778000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-778000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:37:36.466099    3652 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:36.473850    3652 out.go:177] * Starting control plane node test-preload-778000 in cluster test-preload-778000
	I0918 12:37:36.477859    3652 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0918 12:37:36.477956    3652 cache.go:107] acquiring lock: {Name:mk66aa807de4a41bb93b7968a361b55b7b9dc442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:36.477958    3652 cache.go:107] acquiring lock: {Name:mk32e17790f35bbfbebafbac43a3734a9b6981d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:36.477963    3652 cache.go:107] acquiring lock: {Name:mk12df1509619cd6daa98d8c0f57cd08b93d2135 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:36.478139    3652 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0918 12:37:36.478140    3652 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0918 12:37:36.478173    3652 cache.go:107] acquiring lock: {Name:mk7abe688a59178e1156e3079e4595f2d0c90713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:36.478196    3652 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/test-preload-778000/config.json ...
	I0918 12:37:36.478200    3652 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 12:37:36.478228    3652 cache.go:107] acquiring lock: {Name:mkec071cffd568d5ebf71e073b84241798fa0cad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:36.478209    3652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/test-preload-778000/config.json: {Name:mk1cb324ff6750c70a92d7bcb73538a7a19240ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:37:36.478218    3652 cache.go:107] acquiring lock: {Name:mkdeeab2f97f9430d9982811c3f6367c366933f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:36.478297    3652 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0918 12:37:36.478316    3652 cache.go:107] acquiring lock: {Name:mk8e8df14461ae19915ffb8d1c3d73973253e53b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:36.478369    3652 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 12:37:36.478315    3652 cache.go:107] acquiring lock: {Name:mkfb4b9b16adbf6a0c3fcc417b14ae7ac76793d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:36.478413    3652 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0918 12:37:36.478467    3652 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0918 12:37:36.478563    3652 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0918 12:37:36.478676    3652 start.go:365] acquiring machines lock for test-preload-778000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:37:36.478710    3652 start.go:369] acquired machines lock for "test-preload-778000" in 28.5µs
	I0918 12:37:36.478725    3652 start.go:93] Provisioning new machine with config: &{Name:test-preload-778000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-778000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:37:36.478758    3652 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:37:36.486756    3652 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:37:36.492504    3652 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0918 12:37:36.493099    3652 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0918 12:37:36.493182    3652 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 12:37:36.493613    3652 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0918 12:37:36.493734    3652 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 12:37:36.493819    3652 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0918 12:37:36.493936    3652 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0918 12:37:36.494092    3652 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0918 12:37:36.502988    3652 start.go:159] libmachine.API.Create for "test-preload-778000" (driver="qemu2")
	I0918 12:37:36.503007    3652 client.go:168] LocalClient.Create starting
	I0918 12:37:36.503077    3652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:37:36.503106    3652 main.go:141] libmachine: Decoding PEM data...
	I0918 12:37:36.503116    3652 main.go:141] libmachine: Parsing certificate...
	I0918 12:37:36.503156    3652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:37:36.503175    3652 main.go:141] libmachine: Decoding PEM data...
	I0918 12:37:36.503183    3652 main.go:141] libmachine: Parsing certificate...
	I0918 12:37:36.503493    3652 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:37:36.632510    3652 main.go:141] libmachine: Creating SSH key...
	I0918 12:37:36.821603    3652 main.go:141] libmachine: Creating Disk image...
	I0918 12:37:36.821613    3652 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:37:36.821750    3652 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/disk.qcow2
	I0918 12:37:36.829989    3652 main.go:141] libmachine: STDOUT: 
	I0918 12:37:36.830006    3652 main.go:141] libmachine: STDERR: 
	I0918 12:37:36.830071    3652 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/disk.qcow2 +20000M
	I0918 12:37:36.837487    3652 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:37:36.837501    3652 main.go:141] libmachine: STDERR: 
	I0918 12:37:36.837520    3652 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/disk.qcow2
	I0918 12:37:36.837529    3652 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:37:36.837575    3652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:1e:f7:6b:b3:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/disk.qcow2
	I0918 12:37:36.839189    3652 main.go:141] libmachine: STDOUT: 
	I0918 12:37:36.839203    3652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:37:36.839223    3652 client.go:171] LocalClient.Create took 336.218042ms
	I0918 12:37:37.354269    3652 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0918 12:37:37.400621    3652 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0918 12:37:37.784474    3652 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0918 12:37:37.954853    3652 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0918 12:37:37.954878    3652 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0918 12:37:38.070543    3652 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0918 12:37:38.070571    3652 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0918 12:37:38.177974    3652 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0918 12:37:38.177992    3652 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.700071459s
	I0918 12:37:38.178000    3652 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0918 12:37:38.196828    3652 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0918 12:37:38.395268    3652 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0918 12:37:38.691644    3652 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0918 12:37:38.828307    3652 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0918 12:37:38.828374    3652 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.350270917s
	I0918 12:37:38.828405    3652 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0918 12:37:38.839459    3652 start.go:128] duration metric: createHost completed in 2.360730459s
	I0918 12:37:38.839496    3652 start.go:83] releasing machines lock for "test-preload-778000", held for 2.360820584s
	W0918 12:37:38.839548    3652 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:37:38.852826    3652 out.go:177] * Deleting "test-preload-778000" in qemu2 ...
	W0918 12:37:38.872290    3652 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:37:38.872331    3652 start.go:703] Will try again in 5 seconds ...
	I0918 12:37:40.178408    3652 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0918 12:37:40.178457    3652 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.700292s
	I0918 12:37:40.178493    3652 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0918 12:37:40.782234    3652 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0918 12:37:40.782305    3652 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.3041515s
	I0918 12:37:40.782339    3652 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0918 12:37:41.276395    3652 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0918 12:37:41.276470    3652 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.798597s
	I0918 12:37:41.276501    3652 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0918 12:37:41.800828    3652 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0918 12:37:41.800874    3652 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.32270425s
	I0918 12:37:41.800908    3652 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0918 12:37:43.501503    3652 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0918 12:37:43.501557    3652 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 7.023735208s
	I0918 12:37:43.501590    3652 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0918 12:37:43.872540    3652 start.go:365] acquiring machines lock for test-preload-778000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:37:43.872973    3652 start.go:369] acquired machines lock for "test-preload-778000" in 361.583µs
	I0918 12:37:43.873091    3652 start.go:93] Provisioning new machine with config: &{Name:test-preload-778000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-778000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:37:43.873376    3652 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:37:43.881948    3652 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:37:43.928355    3652 start.go:159] libmachine.API.Create for "test-preload-778000" (driver="qemu2")
	I0918 12:37:43.928401    3652 client.go:168] LocalClient.Create starting
	I0918 12:37:43.928555    3652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:37:43.928614    3652 main.go:141] libmachine: Decoding PEM data...
	I0918 12:37:43.928644    3652 main.go:141] libmachine: Parsing certificate...
	I0918 12:37:43.928707    3652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:37:43.928743    3652 main.go:141] libmachine: Decoding PEM data...
	I0918 12:37:43.928760    3652 main.go:141] libmachine: Parsing certificate...
	I0918 12:37:43.929257    3652 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:37:44.057901    3652 main.go:141] libmachine: Creating SSH key...
	I0918 12:37:44.202849    3652 main.go:141] libmachine: Creating Disk image...
	I0918 12:37:44.202859    3652 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:37:44.203001    3652 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/disk.qcow2
	I0918 12:37:44.211563    3652 main.go:141] libmachine: STDOUT: 
	I0918 12:37:44.211586    3652 main.go:141] libmachine: STDERR: 
	I0918 12:37:44.211656    3652 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/disk.qcow2 +20000M
	I0918 12:37:44.219096    3652 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:37:44.219110    3652 main.go:141] libmachine: STDERR: 
	I0918 12:37:44.219127    3652 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/disk.qcow2
	I0918 12:37:44.219137    3652 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:37:44.219179    3652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:0e:db:1b:90:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/test-preload-778000/disk.qcow2
	I0918 12:37:44.220822    3652 main.go:141] libmachine: STDOUT: 
	I0918 12:37:44.220835    3652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:37:44.220850    3652 client.go:171] LocalClient.Create took 292.437625ms
	I0918 12:37:44.655868    3652 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0918 12:37:44.655947    3652 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.177875125s
	I0918 12:37:44.655972    3652 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0918 12:37:44.656047    3652 cache.go:87] Successfully saved all images to host disk.
	I0918 12:37:46.223106    3652 start.go:128] duration metric: createHost completed in 2.349679625s
	I0918 12:37:46.223175    3652 start.go:83] releasing machines lock for "test-preload-778000", held for 2.350223541s
	W0918 12:37:46.223525    3652 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-778000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-778000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:37:46.231907    3652 out.go:177] 
	W0918 12:37:46.236088    3652 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:37:46.236113    3652 out.go:239] * 
	* 
	W0918 12:37:46.238766    3652 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:37:46.246060    3652 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-778000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:523: *** TestPreload FAILED at 2023-09-18 12:37:46.262787 -0700 PDT m=+2762.738941292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-778000 -n test-preload-778000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-778000 -n test-preload-778000: exit status 7 (63.626083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-778000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-778000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-778000
--- FAIL: TestPreload (10.06s)
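Every VM-creation failure in this run reduces to the same root cause: QEMU could not connect to the socket_vmnet control socket at `/var/run/socket_vmnet`. A minimal triage sketch for the test host (hypothetical, not part of the test suite; the socket path comes from the log above, and the `nc` probe assumes a netcat with `-U` Unix-socket support, e.g. the BSD variant shipped with macOS):

```shell
# Check whether the socket_vmnet control socket exists and accepts connections.
# "Connection refused" in the log means the path resolved but nothing was
# listening, which usually points at a stopped socket_vmnet daemon.
SOCKET="${SOCKET:-/var/run/socket_vmnet}"

if [ -S "$SOCKET" ]; then
    echo "socket exists: $SOCKET"
    if command -v nc >/dev/null 2>&1; then
        # A refused connect here reproduces the failure mode seen above.
        nc -z -U "$SOCKET" 2>/dev/null \
            && echo "listener OK" \
            || echo "connection refused (restart the socket_vmnet service)"
    fi
else
    echo "socket missing: $SOCKET (is the socket_vmnet service running?)"
fi
```

Either branch ("socket missing" or "connection refused") is consistent with the `GUEST_PROVISION` exits recorded for TestPreload and the other qemu2 tests in this report.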

TestScheduledStopUnix (9.77s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-424000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-424000 --memory=2048 --driver=qemu2 : exit status 80 (9.600275958s)

-- stdout --
	* [scheduled-stop-424000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-424000 in cluster scheduled-stop-424000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-424000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-424000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-424000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-424000 in cluster scheduled-stop-424000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-424000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-424000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-09-18 12:37:56.027381 -0700 PDT m=+2772.503718584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-424000 -n scheduled-stop-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-424000 -n scheduled-stop-424000: exit status 7 (65.162375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-424000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-424000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-424000
--- FAIL: TestScheduledStopUnix (9.77s)

TestSkaffold (12.07s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3644723705 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-492000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-492000 --memory=2600 --driver=qemu2 : exit status 80 (9.818357917s)

-- stdout --
	* [skaffold-492000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-492000 in cluster skaffold-492000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-492000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-492000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-492000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-492000 in cluster skaffold-492000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-492000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-492000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:523: *** TestSkaffold FAILED at 2023-09-18 12:38:08.106164 -0700 PDT m=+2784.582727626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-492000 -n skaffold-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-492000 -n skaffold-492000: exit status 7 (60.360833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-492000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-492000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-492000
--- FAIL: TestSkaffold (12.07s)

TestRunningBinaryUpgrade (169.13s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:107: v1.6.2 release installation failed: bad response code: 404
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-09-18 12:41:37.247541 -0700 PDT m=+2993.728024876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-175000 -n running-upgrade-175000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-175000 -n running-upgrade-175000: exit status 85 (80.366916ms)

-- stdout --
	* Profile "running-upgrade-175000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-175000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-175000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-175000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-175000\"")
helpers_test.go:175: Cleaning up "running-upgrade-175000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-175000
--- FAIL: TestRunningBinaryUpgrade (169.13s)
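Unlike the socket_vmnet failures, this test died downloading the old release binary ("v1.6.2 release installation failed: bad response code: 404"), likely because v1.6.2 predates minikube's darwin/arm64 release builds. A sketch of the URL the upgrade test would be fetching (the `minikube-<os>-<arch>` pattern on the release bucket is an assumption about the download layout, not taken from this log):

```shell
# Reconstruct the release-binary URL for the version/OS/arch under test.
# If no darwin-arm64 artifact was ever published for v1.6.2, a HEAD request
# against this URL would return the 404 reported above.
VERSION="v1.6.2"
OS="darwin"
ARCH="arm64"
URL="https://storage.googleapis.com/minikube/releases/${VERSION}/minikube-${OS}-${ARCH}"
echo "$URL"
```

A `curl -sI "$URL"` on the test host would confirm whether the artifact exists before blaming the harness.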

TestKubernetesUpgrade (15.34s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-981000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-981000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.828813333s)

-- stdout --
	* [kubernetes-upgrade-981000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-981000 in cluster kubernetes-upgrade-981000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-981000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:41:37.590652    4182 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:41:37.590782    4182 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:41:37.590785    4182 out.go:309] Setting ErrFile to fd 2...
	I0918 12:41:37.590788    4182 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:41:37.590911    4182 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:41:37.592003    4182 out.go:303] Setting JSON to false
	I0918 12:41:37.607362    4182 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4271,"bootTime":1695061826,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:41:37.607425    4182 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:41:37.611039    4182 out.go:177] * [kubernetes-upgrade-981000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:41:37.621018    4182 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:41:37.625022    4182 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:41:37.621065    4182 notify.go:220] Checking for updates...
	I0918 12:41:37.632980    4182 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:41:37.636022    4182 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:41:37.638914    4182 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:41:37.641989    4182 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:41:37.645315    4182 config.go:182] Loaded profile config "cert-expiration-336000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:41:37.645375    4182 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:41:37.649945    4182 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:41:37.656992    4182 start.go:298] selected driver: qemu2
	I0918 12:41:37.656999    4182 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:41:37.657005    4182 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:41:37.659026    4182 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:41:37.662024    4182 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:41:37.665035    4182 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 12:41:37.665053    4182 cni.go:84] Creating CNI manager for ""
	I0918 12:41:37.665059    4182 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 12:41:37.665063    4182 start_flags.go:321] config:
	{Name:kubernetes-upgrade-981000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-981000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:41:37.669266    4182 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:41:37.675965    4182 out.go:177] * Starting control plane node kubernetes-upgrade-981000 in cluster kubernetes-upgrade-981000
	I0918 12:41:37.679957    4182 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0918 12:41:37.679979    4182 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0918 12:41:37.679991    4182 cache.go:57] Caching tarball of preloaded images
	I0918 12:41:37.680062    4182 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:41:37.680067    4182 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0918 12:41:37.680151    4182 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/kubernetes-upgrade-981000/config.json ...
	I0918 12:41:37.680165    4182 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/kubernetes-upgrade-981000/config.json: {Name:mk686991f4fcea9d19cac545ccfc266ae0e3840d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:41:37.680387    4182 start.go:365] acquiring machines lock for kubernetes-upgrade-981000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:41:37.680419    4182 start.go:369] acquired machines lock for "kubernetes-upgrade-981000" in 23.5µs
	I0918 12:41:37.680434    4182 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-981000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:41:37.680465    4182 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:41:37.684960    4182 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:41:37.701501    4182 start.go:159] libmachine.API.Create for "kubernetes-upgrade-981000" (driver="qemu2")
	I0918 12:41:37.701526    4182 client.go:168] LocalClient.Create starting
	I0918 12:41:37.701583    4182 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:41:37.701617    4182 main.go:141] libmachine: Decoding PEM data...
	I0918 12:41:37.701627    4182 main.go:141] libmachine: Parsing certificate...
	I0918 12:41:37.701669    4182 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:41:37.701689    4182 main.go:141] libmachine: Decoding PEM data...
	I0918 12:41:37.701697    4182 main.go:141] libmachine: Parsing certificate...
	I0918 12:41:37.702008    4182 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:41:37.830217    4182 main.go:141] libmachine: Creating SSH key...
	I0918 12:41:38.006680    4182 main.go:141] libmachine: Creating Disk image...
	I0918 12:41:38.006688    4182 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:41:38.006860    4182 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2
	I0918 12:41:38.015517    4182 main.go:141] libmachine: STDOUT: 
	I0918 12:41:38.015533    4182 main.go:141] libmachine: STDERR: 
	I0918 12:41:38.015582    4182 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2 +20000M
	I0918 12:41:38.022710    4182 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:41:38.022730    4182 main.go:141] libmachine: STDERR: 
	I0918 12:41:38.022750    4182 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2
	I0918 12:41:38.022758    4182 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:41:38.022803    4182 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:8f:88:b5:11:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2
	I0918 12:41:38.024383    4182 main.go:141] libmachine: STDOUT: 
	I0918 12:41:38.024398    4182 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:41:38.024418    4182 client.go:171] LocalClient.Create took 322.891666ms
	I0918 12:41:40.026613    4182 start.go:128] duration metric: createHost completed in 2.346158291s
	I0918 12:41:40.026719    4182 start.go:83] releasing machines lock for "kubernetes-upgrade-981000", held for 2.346331666s
	W0918 12:41:40.026795    4182 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:41:40.036113    4182 out.go:177] * Deleting "kubernetes-upgrade-981000" in qemu2 ...
	W0918 12:41:40.061567    4182 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:41:40.061598    4182 start.go:703] Will try again in 5 seconds ...
	I0918 12:41:45.063737    4182 start.go:365] acquiring machines lock for kubernetes-upgrade-981000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:41:45.064139    4182 start.go:369] acquired machines lock for "kubernetes-upgrade-981000" in 292.208µs
	I0918 12:41:45.064248    4182 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-981000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:41:45.064505    4182 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:41:45.070033    4182 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:41:45.119037    4182 start.go:159] libmachine.API.Create for "kubernetes-upgrade-981000" (driver="qemu2")
	I0918 12:41:45.119086    4182 client.go:168] LocalClient.Create starting
	I0918 12:41:45.119193    4182 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:41:45.119267    4182 main.go:141] libmachine: Decoding PEM data...
	I0918 12:41:45.119291    4182 main.go:141] libmachine: Parsing certificate...
	I0918 12:41:45.119354    4182 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:41:45.119393    4182 main.go:141] libmachine: Decoding PEM data...
	I0918 12:41:45.119409    4182 main.go:141] libmachine: Parsing certificate...
	I0918 12:41:45.120057    4182 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:41:45.253781    4182 main.go:141] libmachine: Creating SSH key...
	I0918 12:41:45.336951    4182 main.go:141] libmachine: Creating Disk image...
	I0918 12:41:45.336957    4182 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:41:45.337097    4182 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2
	I0918 12:41:45.345584    4182 main.go:141] libmachine: STDOUT: 
	I0918 12:41:45.345605    4182 main.go:141] libmachine: STDERR: 
	I0918 12:41:45.345660    4182 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2 +20000M
	I0918 12:41:45.352724    4182 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:41:45.352748    4182 main.go:141] libmachine: STDERR: 
	I0918 12:41:45.352764    4182 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2
	I0918 12:41:45.352769    4182 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:41:45.352804    4182 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:2b:71:3b:20:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2
	I0918 12:41:45.354277    4182 main.go:141] libmachine: STDOUT: 
	I0918 12:41:45.354290    4182 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:41:45.354303    4182 client.go:171] LocalClient.Create took 235.216292ms
	I0918 12:41:47.356441    4182 start.go:128] duration metric: createHost completed in 2.29194075s
	I0918 12:41:47.356505    4182 start.go:83] releasing machines lock for "kubernetes-upgrade-981000", held for 2.292387625s
	W0918 12:41:47.357016    4182 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-981000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-981000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:41:47.364221    4182 out.go:177] 
	W0918 12:41:47.368831    4182 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:41:47.368884    4182 out.go:239] * 
	* 
	W0918 12:41:47.371352    4182 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:41:47.379757    4182 out.go:177] 

** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-981000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-981000
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-981000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-981000 status --format={{.Host}}: exit status 7 (32.112084ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-981000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-981000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.177577125s)

-- stdout --
	* [kubernetes-upgrade-981000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-981000 in cluster kubernetes-upgrade-981000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-981000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-981000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:41:47.549988    4219 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:41:47.550115    4219 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:41:47.550118    4219 out.go:309] Setting ErrFile to fd 2...
	I0918 12:41:47.550121    4219 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:41:47.550254    4219 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:41:47.551275    4219 out.go:303] Setting JSON to false
	I0918 12:41:47.566199    4219 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4281,"bootTime":1695061826,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:41:47.566268    4219 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:41:47.570700    4219 out.go:177] * [kubernetes-upgrade-981000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:41:47.577865    4219 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:41:47.581855    4219 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:41:47.577910    4219 notify.go:220] Checking for updates...
	I0918 12:41:47.587855    4219 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:41:47.590860    4219 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:41:47.593868    4219 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:41:47.596850    4219 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:41:47.600124    4219 config.go:182] Loaded profile config "kubernetes-upgrade-981000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0918 12:41:47.600385    4219 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:41:47.604815    4219 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 12:41:47.610791    4219 start.go:298] selected driver: qemu2
	I0918 12:41:47.610798    4219 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-981000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:41:47.610850    4219 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:41:47.612960    4219 cni.go:84] Creating CNI manager for ""
	I0918 12:41:47.612973    4219 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:41:47.612980    4219 start_flags.go:321] config:
	{Name:kubernetes-upgrade-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubernetes-upgrade-981000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:41:47.617063    4219 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:41:47.623900    4219 out.go:177] * Starting control plane node kubernetes-upgrade-981000 in cluster kubernetes-upgrade-981000
	I0918 12:41:47.627837    4219 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:41:47.627853    4219 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:41:47.627861    4219 cache.go:57] Caching tarball of preloaded images
	I0918 12:41:47.627906    4219 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:41:47.627912    4219 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:41:47.627960    4219 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/kubernetes-upgrade-981000/config.json ...
	I0918 12:41:47.628325    4219 start.go:365] acquiring machines lock for kubernetes-upgrade-981000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:41:47.628353    4219 start.go:369] acquired machines lock for "kubernetes-upgrade-981000" in 22.375µs
	I0918 12:41:47.628364    4219 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:41:47.628368    4219 fix.go:54] fixHost starting: 
	I0918 12:41:47.628480    4219 fix.go:102] recreateIfNeeded on kubernetes-upgrade-981000: state=Stopped err=<nil>
	W0918 12:41:47.628488    4219 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:41:47.636792    4219 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-981000" ...
	I0918 12:41:47.640882    4219 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:2b:71:3b:20:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2
	I0918 12:41:47.642714    4219 main.go:141] libmachine: STDOUT: 
	I0918 12:41:47.642730    4219 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:41:47.642761    4219 fix.go:56] fixHost completed within 14.391959ms
	I0918 12:41:47.642766    4219 start.go:83] releasing machines lock for "kubernetes-upgrade-981000", held for 14.408417ms
	W0918 12:41:47.642772    4219 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:41:47.642802    4219 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:41:47.642806    4219 start.go:703] Will try again in 5 seconds ...
	I0918 12:41:52.644930    4219 start.go:365] acquiring machines lock for kubernetes-upgrade-981000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:41:52.645372    4219 start.go:369] acquired machines lock for "kubernetes-upgrade-981000" in 340.667µs
	I0918 12:41:52.645483    4219 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:41:52.645504    4219 fix.go:54] fixHost starting: 
	I0918 12:41:52.646233    4219 fix.go:102] recreateIfNeeded on kubernetes-upgrade-981000: state=Stopped err=<nil>
	W0918 12:41:52.646258    4219 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:41:52.655684    4219 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-981000" ...
	I0918 12:41:52.658856    4219 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:2b:71:3b:20:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubernetes-upgrade-981000/disk.qcow2
	I0918 12:41:52.667877    4219 main.go:141] libmachine: STDOUT: 
	I0918 12:41:52.667946    4219 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:41:52.668036    4219 fix.go:56] fixHost completed within 22.533125ms
	I0918 12:41:52.668057    4219 start.go:83] releasing machines lock for "kubernetes-upgrade-981000", held for 22.664584ms
	W0918 12:41:52.668286    4219 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-981000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-981000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:41:52.676576    4219 out.go:177] 
	W0918 12:41:52.679685    4219 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:41:52.679708    4219 out.go:239] * 
	* 
	W0918 12:41:52.682301    4219 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:41:52.690634    4219 out.go:177] 

** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-981000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-981000 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-981000 version --output=json: exit status 1 (64.966834ms)

** stderr ** 
	error: context "kubernetes-upgrade-981000" does not exist

** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:523: *** TestKubernetesUpgrade FAILED at 2023-09-18 12:41:52.769026 -0700 PDT m=+3009.249801167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-981000 -n kubernetes-upgrade-981000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-981000 -n kubernetes-upgrade-981000: exit status 7 (31.6125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-981000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-981000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-981000
--- FAIL: TestKubernetesUpgrade (15.34s)
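Every qemu2 failure in this run traces back to the same line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning nothing was serving the socket_vmnet unix socket on the CI host when the VMs were launched. A minimal pre-flight check is sketched below; the socket path comes from the logs above, but the check itself (and the use of BSD `nc -zU`) is my own illustration, not part of the test suite.

```shell
#!/bin/sh
# Report the state of a socket_vmnet-style unix socket:
#   "missing"   - the daemon never created the socket file
#   "listening" - the socket exists and something accepts connections
#   "stale"     - the socket file exists but nothing is serving it
socket_state() {
  sock="$1"
  if [ ! -S "$sock" ]; then
    echo "missing"
  elif command -v nc >/dev/null 2>&1 && nc -zU "$sock" 2>/dev/null; then
    echo "listening"
  else
    echo "stale"
  fi
}

socket_state /var/run/socket_vmnet
```

Anything other than "listening" before the run starts would explain the wall of "Connection refused" failures without touching minikube itself.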

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.69s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17263
- KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1181946746/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.69s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.85s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17263
- KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2100907623/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.85s)
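Both hyperkit subtests exit 56 with `DRV_UNSUPPORTED_OS` because hyperkit is an Intel-only hypervisor; on a darwin/arm64 host these tests cannot pass and should be skipped. The gate can be sketched as below; this mirrors the check conceptually and is not minikube's actual implementation.

```shell
#!/bin/sh
# Illustrative driver gate: hyperkit only runs on Intel macs, while qemu2
# covers both darwin/amd64 and darwin/arm64.
driver_supported() {
  driver="$1" os="$2" arch="$3"
  case "$driver" in
    hyperkit)
      if [ "$os" = "darwin" ] && [ "$arch" = "x86_64" ]; then
        echo yes
      else
        echo no
      fi ;;
    qemu2)
      echo yes ;;
    *)
      echo no ;;
  esac
}

driver_supported hyperkit darwin arm64   # the CI host in this report -> no
```

A test harness aware of this would skip the hyperkit upgrade subtests on Apple Silicon rather than counting them as failures.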

TestStoppedBinaryUpgrade/Setup (176.32s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:168: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (176.32s)
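The Setup step downloads an old minikube release binary and got a 404. A plausible cause (an assumption on my part, since the exact URL is not in this excerpt) is that v1.6.2 predates darwin/arm64 builds, so an asset name derived from the host architecture does not exist for that tag. The URL construction would look roughly like this; owner, repo, and tag are real, the naming scheme applied to old tags is the assumption.

```shell
#!/bin/sh
# Build a GitHub release asset URL the way an upgrade test plausibly would:
# https://github.com/OWNER/REPO/releases/download/TAG/ASSET
release_url() {
  ver="$1" os="$2" arch="$3"
  echo "https://github.com/kubernetes/minikube/releases/download/$ver/minikube-$os-$arch"
}

# v1.6.2 shipped no darwin-arm64 asset, so fetching this URL returns 404:
release_url v1.6.2 darwin arm64
```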

TestPause/serial/Start (9.75s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-507000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-507000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.683541875s)

-- stdout --
	* [pause-507000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-507000 in cluster pause-507000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-507000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-507000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-507000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-507000 -n pause-507000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-507000 -n pause-507000: exit status 7 (66.239167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-507000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.75s)

TestNoKubernetes/serial/StartWithK8s (9.86s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-592000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-592000 --driver=qemu2 : exit status 80 (9.787975708s)

-- stdout --
	* [NoKubernetes-592000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-592000 in cluster NoKubernetes-592000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-592000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-592000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-592000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-592000 -n NoKubernetes-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-592000 -n NoKubernetes-592000: exit status 7 (66.561667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-592000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.86s)

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-592000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-592000 --no-kubernetes --driver=qemu2 : exit status 80 (5.242648s)

-- stdout --
	* [NoKubernetes-592000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-592000
	* Restarting existing qemu2 VM for "NoKubernetes-592000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-592000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-592000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-592000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-592000 -n NoKubernetes-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-592000 -n NoKubernetes-592000: exit status 7 (65.715375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-592000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-592000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-592000 --no-kubernetes --driver=qemu2 : exit status 80 (5.251216583s)

-- stdout --
	* [NoKubernetes-592000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-592000
	* Restarting existing qemu2 VM for "NoKubernetes-592000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-592000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-592000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-592000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-592000 -n NoKubernetes-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-592000 -n NoKubernetes-592000: exit status 7 (66.91525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-592000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-592000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-592000 --driver=qemu2 : exit status 80 (5.235570041s)

-- stdout --
	* [NoKubernetes-592000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-592000
	* Restarting existing qemu2 VM for "NoKubernetes-592000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-592000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-592000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-592000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-592000 -n NoKubernetes-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-592000 -n NoKubernetes-592000: exit status 7 (61.613708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-592000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)
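Editor's note: this failure, like most in this report, reduces to the same symptom — `ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused` — meaning the socket_vmnet daemon was not serving its unix socket on the CI host when `socket_vmnet_client` tried to attach QEMU to it. A minimal diagnostic sketch follows; the socket path is taken from the log above, while the `check_socket` helper and the commented restart command are illustrative assumptions (the restart line presumes a Homebrew-managed install, which may not match this host):

```shell
#!/bin/sh
# Check whether a unix socket exists at the given path. This is the
# precondition socket_vmnet_client needs before it can pass a vmnet
# file descriptor to qemu-system-aarch64.
check_socket() {
  [ -S "$1" ]   # test -S: true iff the path exists and is a socket
}

if check_socket /var/run/socket_vmnet; then
  echo "socket_vmnet socket present"
else
  echo "socket_vmnet socket missing - daemon likely not running"
  # Hypothetical remediation on a Homebrew install:
  #   sudo brew services restart socket_vmnet
fi
```

Note that the socket existing does not prove the daemon is still alive and accepting connections; a stale socket file left by a crashed daemon would also produce "Connection refused".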

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.961308417s)

                                                
                                                
-- stdout --
	* [auto-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-716000 in cluster auto-716000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-716000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 12:42:29.024105    4343 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:42:29.024238    4343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:42:29.024241    4343 out.go:309] Setting ErrFile to fd 2...
	I0918 12:42:29.024243    4343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:42:29.024368    4343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:42:29.025405    4343 out.go:303] Setting JSON to false
	I0918 12:42:29.040489    4343 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4323,"bootTime":1695061826,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:42:29.040571    4343 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:42:29.045496    4343 out.go:177] * [auto-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:42:29.052555    4343 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:42:29.052609    4343 notify.go:220] Checking for updates...
	I0918 12:42:29.056439    4343 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:42:29.059485    4343 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:42:29.062522    4343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:42:29.065468    4343 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:42:29.068515    4343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:42:29.071687    4343 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:42:29.075489    4343 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:42:29.082486    4343 start.go:298] selected driver: qemu2
	I0918 12:42:29.082493    4343 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:42:29.082500    4343 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:42:29.084491    4343 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:42:29.087469    4343 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:42:29.090549    4343 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:42:29.090566    4343 cni.go:84] Creating CNI manager for ""
	I0918 12:42:29.090573    4343 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:42:29.090577    4343 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:42:29.090582    4343 start_flags.go:321] config:
	{Name:auto-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:42:29.094705    4343 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:42:29.105440    4343 out.go:177] * Starting control plane node auto-716000 in cluster auto-716000
	I0918 12:42:29.109489    4343 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:42:29.109507    4343 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:42:29.109520    4343 cache.go:57] Caching tarball of preloaded images
	I0918 12:42:29.109575    4343 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:42:29.109581    4343 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:42:29.109796    4343 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/auto-716000/config.json ...
	I0918 12:42:29.109809    4343 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/auto-716000/config.json: {Name:mk8c220e792564ba45472b65a36ea5bb5e32c550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:42:29.110021    4343 start.go:365] acquiring machines lock for auto-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:42:29.110054    4343 start.go:369] acquired machines lock for "auto-716000" in 27.583µs
	I0918 12:42:29.110065    4343 start.go:93] Provisioning new machine with config: &{Name:auto-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:42:29.110093    4343 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:42:29.117494    4343 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:42:29.133451    4343 start.go:159] libmachine.API.Create for "auto-716000" (driver="qemu2")
	I0918 12:42:29.133481    4343 client.go:168] LocalClient.Create starting
	I0918 12:42:29.133542    4343 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:42:29.133573    4343 main.go:141] libmachine: Decoding PEM data...
	I0918 12:42:29.133590    4343 main.go:141] libmachine: Parsing certificate...
	I0918 12:42:29.133625    4343 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:42:29.133644    4343 main.go:141] libmachine: Decoding PEM data...
	I0918 12:42:29.133652    4343 main.go:141] libmachine: Parsing certificate...
	I0918 12:42:29.133987    4343 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:42:29.247845    4343 main.go:141] libmachine: Creating SSH key...
	I0918 12:42:29.407455    4343 main.go:141] libmachine: Creating Disk image...
	I0918 12:42:29.407464    4343 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:42:29.407598    4343 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/disk.qcow2
	I0918 12:42:29.416170    4343 main.go:141] libmachine: STDOUT: 
	I0918 12:42:29.416185    4343 main.go:141] libmachine: STDERR: 
	I0918 12:42:29.416237    4343 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/disk.qcow2 +20000M
	I0918 12:42:29.423367    4343 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:42:29.423379    4343 main.go:141] libmachine: STDERR: 
	I0918 12:42:29.423398    4343 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/disk.qcow2
	I0918 12:42:29.423404    4343 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:42:29.423437    4343 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:fb:e0:4c:c7:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/disk.qcow2
	I0918 12:42:29.424890    4343 main.go:141] libmachine: STDOUT: 
	I0918 12:42:29.424905    4343 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:42:29.424923    4343 client.go:171] LocalClient.Create took 291.440709ms
	I0918 12:42:31.427202    4343 start.go:128] duration metric: createHost completed in 2.317105375s
	I0918 12:42:31.427272    4343 start.go:83] releasing machines lock for "auto-716000", held for 2.317251917s
	W0918 12:42:31.427421    4343 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:42:31.433718    4343 out.go:177] * Deleting "auto-716000" in qemu2 ...
	W0918 12:42:31.458340    4343 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:42:31.458368    4343 start.go:703] Will try again in 5 seconds ...
	I0918 12:42:36.460558    4343 start.go:365] acquiring machines lock for auto-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:42:36.460964    4343 start.go:369] acquired machines lock for "auto-716000" in 294.5µs
	I0918 12:42:36.461087    4343 start.go:93] Provisioning new machine with config: &{Name:auto-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:42:36.461478    4343 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:42:36.469162    4343 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:42:36.514499    4343 start.go:159] libmachine.API.Create for "auto-716000" (driver="qemu2")
	I0918 12:42:36.514531    4343 client.go:168] LocalClient.Create starting
	I0918 12:42:36.514693    4343 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:42:36.514763    4343 main.go:141] libmachine: Decoding PEM data...
	I0918 12:42:36.514786    4343 main.go:141] libmachine: Parsing certificate...
	I0918 12:42:36.514857    4343 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:42:36.514893    4343 main.go:141] libmachine: Decoding PEM data...
	I0918 12:42:36.514906    4343 main.go:141] libmachine: Parsing certificate...
	I0918 12:42:36.515447    4343 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:42:36.640081    4343 main.go:141] libmachine: Creating SSH key...
	I0918 12:42:36.897190    4343 main.go:141] libmachine: Creating Disk image...
	I0918 12:42:36.897201    4343 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:42:36.897341    4343 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/disk.qcow2
	I0918 12:42:36.905928    4343 main.go:141] libmachine: STDOUT: 
	I0918 12:42:36.905945    4343 main.go:141] libmachine: STDERR: 
	I0918 12:42:36.905999    4343 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/disk.qcow2 +20000M
	I0918 12:42:36.913263    4343 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:42:36.913281    4343 main.go:141] libmachine: STDERR: 
	I0918 12:42:36.913293    4343 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/disk.qcow2
	I0918 12:42:36.913299    4343 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:42:36.913337    4343 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:ef:ee:b1:c8:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/auto-716000/disk.qcow2
	I0918 12:42:36.914857    4343 main.go:141] libmachine: STDOUT: 
	I0918 12:42:36.914870    4343 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:42:36.914885    4343 client.go:171] LocalClient.Create took 400.356416ms
	I0918 12:42:38.917050    4343 start.go:128] duration metric: createHost completed in 2.455544708s
	I0918 12:42:38.917178    4343 start.go:83] releasing machines lock for "auto-716000", held for 2.456237209s
	W0918 12:42:38.917589    4343 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:42:38.929236    4343 out.go:177] 
	W0918 12:42:38.933322    4343 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:42:38.933414    4343 out.go:239] * 
	* 
	W0918 12:42:38.936247    4343 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:42:38.946304    4343 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.96s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.948898917s)

                                                
                                                
-- stdout --
	* [kindnet-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-716000 in cluster kindnet-716000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-716000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 12:42:41.017615    4460 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:42:41.017717    4460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:42:41.017719    4460 out.go:309] Setting ErrFile to fd 2...
	I0918 12:42:41.017722    4460 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:42:41.017848    4460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:42:41.018866    4460 out.go:303] Setting JSON to false
	I0918 12:42:41.034215    4460 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4335,"bootTime":1695061826,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:42:41.034289    4460 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:42:41.040036    4460 out.go:177] * [kindnet-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:42:41.047968    4460 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:42:41.051919    4460 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:42:41.048031    4460 notify.go:220] Checking for updates...
	I0918 12:42:41.057853    4460 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:42:41.060946    4460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:42:41.063931    4460 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:42:41.066970    4460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:42:41.070109    4460 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:42:41.073956    4460 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:42:41.080949    4460 start.go:298] selected driver: qemu2
	I0918 12:42:41.080957    4460 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:42:41.080964    4460 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:42:41.083027    4460 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:42:41.085952    4460 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:42:41.088911    4460 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:42:41.088944    4460 cni.go:84] Creating CNI manager for "kindnet"
	I0918 12:42:41.088949    4460 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 12:42:41.088955    4460 start_flags.go:321] config:
	{Name:kindnet-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:42:41.093309    4460 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:42:41.100933    4460 out.go:177] * Starting control plane node kindnet-716000 in cluster kindnet-716000
	I0918 12:42:41.104889    4460 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:42:41.104909    4460 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:42:41.104919    4460 cache.go:57] Caching tarball of preloaded images
	I0918 12:42:41.104994    4460 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:42:41.105000    4460 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:42:41.105230    4460 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/kindnet-716000/config.json ...
	I0918 12:42:41.105244    4460 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/kindnet-716000/config.json: {Name:mk8702261b1e2e3b582182d3ea2c21a886720e62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:42:41.105465    4460 start.go:365] acquiring machines lock for kindnet-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:42:41.105497    4460 start.go:369] acquired machines lock for "kindnet-716000" in 26.042µs
	I0918 12:42:41.105509    4460 start.go:93] Provisioning new machine with config: &{Name:kindnet-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:42:41.105542    4460 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:42:41.113931    4460 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:42:41.130371    4460 start.go:159] libmachine.API.Create for "kindnet-716000" (driver="qemu2")
	I0918 12:42:41.130393    4460 client.go:168] LocalClient.Create starting
	I0918 12:42:41.130447    4460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:42:41.130475    4460 main.go:141] libmachine: Decoding PEM data...
	I0918 12:42:41.130495    4460 main.go:141] libmachine: Parsing certificate...
	I0918 12:42:41.130546    4460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:42:41.130570    4460 main.go:141] libmachine: Decoding PEM data...
	I0918 12:42:41.130579    4460 main.go:141] libmachine: Parsing certificate...
	I0918 12:42:41.130893    4460 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:42:41.244624    4460 main.go:141] libmachine: Creating SSH key...
	I0918 12:42:41.520687    4460 main.go:141] libmachine: Creating Disk image...
	I0918 12:42:41.520700    4460 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:42:41.520886    4460 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/disk.qcow2
	I0918 12:42:41.530150    4460 main.go:141] libmachine: STDOUT: 
	I0918 12:42:41.530165    4460 main.go:141] libmachine: STDERR: 
	I0918 12:42:41.530213    4460 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/disk.qcow2 +20000M
	I0918 12:42:41.537463    4460 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:42:41.537476    4460 main.go:141] libmachine: STDERR: 
	I0918 12:42:41.537500    4460 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/disk.qcow2
	I0918 12:42:41.537506    4460 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:42:41.537543    4460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:24:70:0f:65:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/disk.qcow2
	I0918 12:42:41.539064    4460 main.go:141] libmachine: STDOUT: 
	I0918 12:42:41.539077    4460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:42:41.539095    4460 client.go:171] LocalClient.Create took 408.704333ms
	I0918 12:42:43.541224    4460 start.go:128] duration metric: createHost completed in 2.435709s
	I0918 12:42:43.541289    4460 start.go:83] releasing machines lock for "kindnet-716000", held for 2.435828167s
	W0918 12:42:43.541345    4460 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:42:43.553552    4460 out.go:177] * Deleting "kindnet-716000" in qemu2 ...
	W0918 12:42:43.572924    4460 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:42:43.572956    4460 start.go:703] Will try again in 5 seconds ...
	I0918 12:42:48.575163    4460 start.go:365] acquiring machines lock for kindnet-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:42:48.575635    4460 start.go:369] acquired machines lock for "kindnet-716000" in 357.042µs
	I0918 12:42:48.575752    4460 start.go:93] Provisioning new machine with config: &{Name:kindnet-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:42:48.576059    4460 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:42:48.583684    4460 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:42:48.629517    4460 start.go:159] libmachine.API.Create for "kindnet-716000" (driver="qemu2")
	I0918 12:42:48.629552    4460 client.go:168] LocalClient.Create starting
	I0918 12:42:48.629675    4460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:42:48.629745    4460 main.go:141] libmachine: Decoding PEM data...
	I0918 12:42:48.629770    4460 main.go:141] libmachine: Parsing certificate...
	I0918 12:42:48.629855    4460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:42:48.629896    4460 main.go:141] libmachine: Decoding PEM data...
	I0918 12:42:48.629914    4460 main.go:141] libmachine: Parsing certificate...
	I0918 12:42:48.630402    4460 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:42:48.757326    4460 main.go:141] libmachine: Creating SSH key...
	I0918 12:42:48.881609    4460 main.go:141] libmachine: Creating Disk image...
	I0918 12:42:48.881617    4460 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:42:48.881755    4460 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/disk.qcow2
	I0918 12:42:48.890142    4460 main.go:141] libmachine: STDOUT: 
	I0918 12:42:48.890161    4460 main.go:141] libmachine: STDERR: 
	I0918 12:42:48.890212    4460 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/disk.qcow2 +20000M
	I0918 12:42:48.897510    4460 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:42:48.897524    4460 main.go:141] libmachine: STDERR: 
	I0918 12:42:48.897546    4460 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/disk.qcow2
	I0918 12:42:48.897557    4460 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:42:48.897597    4460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:75:5f:22:23:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kindnet-716000/disk.qcow2
	I0918 12:42:48.899136    4460 main.go:141] libmachine: STDOUT: 
	I0918 12:42:48.899149    4460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:42:48.899164    4460 client.go:171] LocalClient.Create took 269.609333ms
	I0918 12:42:50.901330    4460 start.go:128] duration metric: createHost completed in 2.325278417s
	I0918 12:42:50.901417    4460 start.go:83] releasing machines lock for "kindnet-716000", held for 2.325801834s
	W0918 12:42:50.901834    4460 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:42:50.911429    4460 out.go:177] 
	W0918 12:42:50.915501    4460 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:42:50.915536    4460 out.go:239] * 
	* 
	W0918 12:42:50.917992    4460 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:42:50.926420    4460 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.95s)
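Every start failure in this group reduces to the same stderr line, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which indicates the socket_vmnet daemon was not listening on the socket path the cluster config specifies (`SocketVMnetPath:/var/run/socket_vmnet`). A minimal pre-flight check for the CI host is sketched below; the socket path is taken from the log above, while the suggested launch command is an assumption based on a typical `/opt/socket_vmnet` install, not something this log confirms.

```shell
#!/bin/sh
# Pre-flight check for the "Connection refused" failures above: verify that
# the socket_vmnet daemon is actually listening before qemu2 VMs are created.
SOCK="/var/run/socket_vmnet"   # SocketVMnetPath from the cluster config in the log

if [ -S "$SOCK" ]; then
    echo "socket present: $SOCK"
else
    echo "socket missing or not a socket: $SOCK"
    # Hypothetical launch command for a default /opt/socket_vmnet install;
    # adjust the binary path and gateway for the local setup.
    echo "try: sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 $SOCK"
fi
```

If the socket is absent, every `minikube start --driver=qemu2` on the host will fail the same way, so running a check like this before the test group would turn 83 slow failures into one fast, actionable error.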

                                                
                                    
TestNetworkPlugins/group/calico/Start (10.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.012511791s)

                                                
                                                
-- stdout --
	* [calico-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-716000 in cluster calico-716000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-716000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 12:42:53.096085    4578 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:42:53.096208    4578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:42:53.096212    4578 out.go:309] Setting ErrFile to fd 2...
	I0918 12:42:53.096215    4578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:42:53.096340    4578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:42:53.097435    4578 out.go:303] Setting JSON to false
	I0918 12:42:53.112376    4578 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4347,"bootTime":1695061826,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:42:53.112457    4578 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:42:53.117663    4578 out.go:177] * [calico-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:42:53.125642    4578 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:42:53.129618    4578 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:42:53.125710    4578 notify.go:220] Checking for updates...
	I0918 12:42:53.132642    4578 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:42:53.135551    4578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:42:53.138609    4578 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:42:53.141620    4578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:42:53.144815    4578 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:42:53.148568    4578 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:42:53.155541    4578 start.go:298] selected driver: qemu2
	I0918 12:42:53.155548    4578 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:42:53.155553    4578 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:42:53.157573    4578 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:42:53.161632    4578 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:42:53.164699    4578 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:42:53.164728    4578 cni.go:84] Creating CNI manager for "calico"
	I0918 12:42:53.164731    4578 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0918 12:42:53.164737    4578 start_flags.go:321] config:
	{Name:calico-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:calico-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0918 12:42:53.168784    4578 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:42:53.175571    4578 out.go:177] * Starting control plane node calico-716000 in cluster calico-716000
	I0918 12:42:53.179436    4578 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:42:53.179456    4578 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:42:53.179470    4578 cache.go:57] Caching tarball of preloaded images
	I0918 12:42:53.179532    4578 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:42:53.179538    4578 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:42:53.179762    4578 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/calico-716000/config.json ...
	I0918 12:42:53.179776    4578 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/calico-716000/config.json: {Name:mkcf035dac230d181287091f7177adaa52a8f1fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:42:53.179993    4578 start.go:365] acquiring machines lock for calico-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:42:53.180022    4578 start.go:369] acquired machines lock for "calico-716000" in 23.667µs
	I0918 12:42:53.180033    4578 start.go:93] Provisioning new machine with config: &{Name:calico-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:calico-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:42:53.180064    4578 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:42:53.188457    4578 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:42:53.203989    4578 start.go:159] libmachine.API.Create for "calico-716000" (driver="qemu2")
	I0918 12:42:53.204019    4578 client.go:168] LocalClient.Create starting
	I0918 12:42:53.204072    4578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:42:53.204097    4578 main.go:141] libmachine: Decoding PEM data...
	I0918 12:42:53.204111    4578 main.go:141] libmachine: Parsing certificate...
	I0918 12:42:53.204148    4578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:42:53.204174    4578 main.go:141] libmachine: Decoding PEM data...
	I0918 12:42:53.204182    4578 main.go:141] libmachine: Parsing certificate...
	I0918 12:42:53.204493    4578 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:42:53.317666    4578 main.go:141] libmachine: Creating SSH key...
	I0918 12:42:53.503982    4578 main.go:141] libmachine: Creating Disk image...
	I0918 12:42:53.503992    4578 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:42:53.504164    4578 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/disk.qcow2
	I0918 12:42:53.512760    4578 main.go:141] libmachine: STDOUT: 
	I0918 12:42:53.512775    4578 main.go:141] libmachine: STDERR: 
	I0918 12:42:53.512837    4578 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/disk.qcow2 +20000M
	I0918 12:42:53.520119    4578 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:42:53.520142    4578 main.go:141] libmachine: STDERR: 
	I0918 12:42:53.520165    4578 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/disk.qcow2
	I0918 12:42:53.520171    4578 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:42:53.520209    4578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:4a:47:26:23:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/disk.qcow2
	I0918 12:42:53.521794    4578 main.go:141] libmachine: STDOUT: 
	I0918 12:42:53.521814    4578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:42:53.521836    4578 client.go:171] LocalClient.Create took 317.818542ms
	I0918 12:42:55.523965    4578 start.go:128] duration metric: createHost completed in 2.343927583s
	I0918 12:42:55.524022    4578 start.go:83] releasing machines lock for "calico-716000", held for 2.344033833s
	W0918 12:42:55.524085    4578 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:42:55.537320    4578 out.go:177] * Deleting "calico-716000" in qemu2 ...
	W0918 12:42:55.558177    4578 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:42:55.558207    4578 start.go:703] Will try again in 5 seconds ...
	I0918 12:43:00.560467    4578 start.go:365] acquiring machines lock for calico-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:43:00.561022    4578 start.go:369] acquired machines lock for "calico-716000" in 441.792µs
	I0918 12:43:00.561183    4578 start.go:93] Provisioning new machine with config: &{Name:calico-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:calico-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:43:00.561493    4578 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:43:00.569228    4578 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:43:00.617878    4578 start.go:159] libmachine.API.Create for "calico-716000" (driver="qemu2")
	I0918 12:43:00.617936    4578 client.go:168] LocalClient.Create starting
	I0918 12:43:00.618065    4578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:43:00.618125    4578 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:00.618146    4578 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:00.618225    4578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:43:00.618266    4578 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:00.618280    4578 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:00.618815    4578 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:43:00.746192    4578 main.go:141] libmachine: Creating SSH key...
	I0918 12:43:01.019051    4578 main.go:141] libmachine: Creating Disk image...
	I0918 12:43:01.019063    4578 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:43:01.019244    4578 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/disk.qcow2
	I0918 12:43:01.028273    4578 main.go:141] libmachine: STDOUT: 
	I0918 12:43:01.028286    4578 main.go:141] libmachine: STDERR: 
	I0918 12:43:01.028353    4578 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/disk.qcow2 +20000M
	I0918 12:43:01.035603    4578 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:43:01.035638    4578 main.go:141] libmachine: STDERR: 
	I0918 12:43:01.035654    4578 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/disk.qcow2
	I0918 12:43:01.035661    4578 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:43:01.035697    4578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:42:2f:cf:4a:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/calico-716000/disk.qcow2
	I0918 12:43:01.037301    4578 main.go:141] libmachine: STDOUT: 
	I0918 12:43:01.037322    4578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:43:01.037337    4578 client.go:171] LocalClient.Create took 419.403708ms
	I0918 12:43:03.039473    4578 start.go:128] duration metric: createHost completed in 2.47799825s
	I0918 12:43:03.039536    4578 start.go:83] releasing machines lock for "calico-716000", held for 2.478537625s
	W0918 12:43:03.039953    4578 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:03.051535    4578 out.go:177] 
	W0918 12:43:03.055681    4578 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:43:03.055742    4578 out.go:239] * 
	* 
	W0918 12:43:03.058127    4578 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:43:03.068574    4578 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.02s)

TestNetworkPlugins/group/custom-flannel/Start (9.74s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.734085625s)

-- stdout --
	* [custom-flannel-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-716000 in cluster custom-flannel-716000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-716000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:43:05.394579    4698 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:43:05.394744    4698 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:43:05.394746    4698 out.go:309] Setting ErrFile to fd 2...
	I0918 12:43:05.394749    4698 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:43:05.394884    4698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:43:05.395871    4698 out.go:303] Setting JSON to false
	I0918 12:43:05.410943    4698 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4359,"bootTime":1695061826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:43:05.411031    4698 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:43:05.415498    4698 out.go:177] * [custom-flannel-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:43:05.423328    4698 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:43:05.423366    4698 notify.go:220] Checking for updates...
	I0918 12:43:05.430414    4698 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:43:05.433403    4698 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:43:05.436421    4698 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:43:05.439424    4698 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:43:05.446470    4698 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:43:05.449623    4698 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:43:05.453305    4698 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:43:05.460430    4698 start.go:298] selected driver: qemu2
	I0918 12:43:05.460438    4698 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:43:05.460445    4698 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:43:05.462568    4698 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:43:05.466305    4698 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:43:05.470473    4698 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:43:05.470493    4698 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0918 12:43:05.470503    4698 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0918 12:43:05.470512    4698 start_flags.go:321] config:
	{Name:custom-flannel-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:43:05.474714    4698 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:43:05.480410    4698 out.go:177] * Starting control plane node custom-flannel-716000 in cluster custom-flannel-716000
	I0918 12:43:05.484431    4698 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:43:05.484452    4698 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:43:05.484464    4698 cache.go:57] Caching tarball of preloaded images
	I0918 12:43:05.484535    4698 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:43:05.484541    4698 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:43:05.484777    4698 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/custom-flannel-716000/config.json ...
	I0918 12:43:05.484791    4698 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/custom-flannel-716000/config.json: {Name:mk557bd7a953f82aa3cb04fbe42a7782447a252f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:43:05.485012    4698 start.go:365] acquiring machines lock for custom-flannel-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:43:05.485045    4698 start.go:369] acquired machines lock for "custom-flannel-716000" in 25.625µs
	I0918 12:43:05.485059    4698 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:43:05.485091    4698 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:43:05.493391    4698 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:43:05.510238    4698 start.go:159] libmachine.API.Create for "custom-flannel-716000" (driver="qemu2")
	I0918 12:43:05.510266    4698 client.go:168] LocalClient.Create starting
	I0918 12:43:05.510319    4698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:43:05.510346    4698 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:05.510360    4698 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:05.510399    4698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:43:05.510419    4698 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:05.510427    4698 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:05.510769    4698 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:43:05.624510    4698 main.go:141] libmachine: Creating SSH key...
	I0918 12:43:05.691775    4698 main.go:141] libmachine: Creating Disk image...
	I0918 12:43:05.691781    4698 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:43:05.691911    4698 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/disk.qcow2
	I0918 12:43:05.700292    4698 main.go:141] libmachine: STDOUT: 
	I0918 12:43:05.700306    4698 main.go:141] libmachine: STDERR: 
	I0918 12:43:05.700353    4698 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/disk.qcow2 +20000M
	I0918 12:43:05.707444    4698 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:43:05.707455    4698 main.go:141] libmachine: STDERR: 
	I0918 12:43:05.707470    4698 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/disk.qcow2
	I0918 12:43:05.707476    4698 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:43:05.707521    4698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:09:9d:1d:e1:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/disk.qcow2
	I0918 12:43:05.708978    4698 main.go:141] libmachine: STDOUT: 
	I0918 12:43:05.708989    4698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:43:05.709006    4698 client.go:171] LocalClient.Create took 198.739292ms
	I0918 12:43:07.711148    4698 start.go:128] duration metric: createHost completed in 2.226074625s
	I0918 12:43:07.711216    4698 start.go:83] releasing machines lock for "custom-flannel-716000", held for 2.226203375s
	W0918 12:43:07.711280    4698 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:07.718406    4698 out.go:177] * Deleting "custom-flannel-716000" in qemu2 ...
	W0918 12:43:07.744291    4698 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:07.744314    4698 start.go:703] Will try again in 5 seconds ...
	I0918 12:43:12.746466    4698 start.go:365] acquiring machines lock for custom-flannel-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:43:12.746986    4698 start.go:369] acquired machines lock for "custom-flannel-716000" in 390.25µs
	I0918 12:43:12.747131    4698 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:43:12.747416    4698 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:43:12.753125    4698 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:43:12.801126    4698 start.go:159] libmachine.API.Create for "custom-flannel-716000" (driver="qemu2")
	I0918 12:43:12.801186    4698 client.go:168] LocalClient.Create starting
	I0918 12:43:12.801293    4698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:43:12.801347    4698 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:12.801365    4698 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:12.801420    4698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:43:12.801454    4698 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:12.801776    4698 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:12.802479    4698 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:43:12.927659    4698 main.go:141] libmachine: Creating SSH key...
	I0918 12:43:13.042169    4698 main.go:141] libmachine: Creating Disk image...
	I0918 12:43:13.042174    4698 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:43:13.042318    4698 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/disk.qcow2
	I0918 12:43:13.051074    4698 main.go:141] libmachine: STDOUT: 
	I0918 12:43:13.051091    4698 main.go:141] libmachine: STDERR: 
	I0918 12:43:13.051146    4698 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/disk.qcow2 +20000M
	I0918 12:43:13.058327    4698 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:43:13.058342    4698 main.go:141] libmachine: STDERR: 
	I0918 12:43:13.058355    4698 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/disk.qcow2
	I0918 12:43:13.058361    4698 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:43:13.058402    4698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:05:91:d4:66:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/custom-flannel-716000/disk.qcow2
	I0918 12:43:13.059912    4698 main.go:141] libmachine: STDOUT: 
	I0918 12:43:13.059925    4698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:43:13.059938    4698 client.go:171] LocalClient.Create took 258.751667ms
	I0918 12:43:15.062071    4698 start.go:128] duration metric: createHost completed in 2.314669s
	I0918 12:43:15.062148    4698 start.go:83] releasing machines lock for "custom-flannel-716000", held for 2.315178667s
	W0918 12:43:15.062483    4698 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:15.072129    4698 out.go:177] 
	W0918 12:43:15.076171    4698 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:43:15.076256    4698 out.go:239] * 
	* 
	W0918 12:43:15.078816    4698 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:43:15.089120    4698 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.74s)
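
	[editor's note] Every failure in this group shares the same root-cause line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the qemu2 driver's socket_vmnet client could not reach the socket_vmnet daemon on the CI host. A minimal diagnostic sketch is below; the socket path and client path are copied from the log output above, while the wording of the messages is illustrative only (not part of the test run):

```shell
# Sketch: check whether the socket_vmnet daemon's unix socket exists before
# running "minikube start --driver=qemu2 --network=socket_vmnet".
# Paths are taken verbatim from the failing log lines above.
SOCK=/var/run/socket_vmnet
CLIENT=/opt/socket_vmnet/bin/socket_vmnet_client

if [ -S "$SOCK" ]; then
  echo "socket_vmnet socket present at $SOCK"
else
  echo "socket_vmnet socket missing at $SOCK - start the socket_vmnet daemon first"
fi

# The client binary itself was found (the error is a refused connection,
# not a missing binary), but it does not hurt to confirm:
[ -x "$CLIENT" ] && echo "client binary present" || echo "client binary missing"
```

	[editor's note] On a healthy host the first branch prints; on this agent the repeated refused connections indicate the daemon was down or the socket was stale, which would make every qemu2+socket_vmnet test in this run fail identically.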

                                                
                                    
TestNetworkPlugins/group/false/Start (9.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.815970708s)

                                                
                                                
-- stdout --
	* [false-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-716000 in cluster false-716000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-716000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 12:43:17.403313    4816 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:43:17.403430    4816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:43:17.403432    4816 out.go:309] Setting ErrFile to fd 2...
	I0918 12:43:17.403435    4816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:43:17.403573    4816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:43:17.404642    4816 out.go:303] Setting JSON to false
	I0918 12:43:17.419672    4816 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4371,"bootTime":1695061826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:43:17.419766    4816 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:43:17.424182    4816 out.go:177] * [false-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:43:17.432137    4816 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:43:17.436154    4816 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:43:17.432220    4816 notify.go:220] Checking for updates...
	I0918 12:43:17.439145    4816 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:43:17.442079    4816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:43:17.445130    4816 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:43:17.448169    4816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:43:17.451235    4816 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:43:17.455122    4816 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:43:17.461077    4816 start.go:298] selected driver: qemu2
	I0918 12:43:17.461083    4816 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:43:17.461088    4816 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:43:17.463104    4816 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:43:17.466134    4816 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:43:17.469241    4816 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:43:17.469270    4816 cni.go:84] Creating CNI manager for "false"
	I0918 12:43:17.469275    4816 start_flags.go:321] config:
	{Name:false-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:false-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:43:17.473656    4816 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:43:17.477089    4816 out.go:177] * Starting control plane node false-716000 in cluster false-716000
	I0918 12:43:17.485152    4816 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:43:17.485168    4816 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:43:17.485175    4816 cache.go:57] Caching tarball of preloaded images
	I0918 12:43:17.485225    4816 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:43:17.485229    4816 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:43:17.485416    4816 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/false-716000/config.json ...
	I0918 12:43:17.485430    4816 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/false-716000/config.json: {Name:mkbf544e9e6959130d3f22ba3aefda7ef20f2f9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:43:17.485647    4816 start.go:365] acquiring machines lock for false-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:43:17.485676    4816 start.go:369] acquired machines lock for "false-716000" in 23.917µs
	I0918 12:43:17.485690    4816 start.go:93] Provisioning new machine with config: &{Name:false-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:false-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:43:17.485719    4816 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:43:17.494130    4816 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:43:17.510260    4816 start.go:159] libmachine.API.Create for "false-716000" (driver="qemu2")
	I0918 12:43:17.510286    4816 client.go:168] LocalClient.Create starting
	I0918 12:43:17.510343    4816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:43:17.510372    4816 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:17.510389    4816 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:17.510431    4816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:43:17.510449    4816 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:17.510457    4816 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:17.510780    4816 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:43:17.623969    4816 main.go:141] libmachine: Creating SSH key...
	I0918 12:43:17.758191    4816 main.go:141] libmachine: Creating Disk image...
	I0918 12:43:17.758203    4816 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:43:17.758358    4816 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/disk.qcow2
	I0918 12:43:17.766861    4816 main.go:141] libmachine: STDOUT: 
	I0918 12:43:17.766876    4816 main.go:141] libmachine: STDERR: 
	I0918 12:43:17.766944    4816 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/disk.qcow2 +20000M
	I0918 12:43:17.774261    4816 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:43:17.774275    4816 main.go:141] libmachine: STDERR: 
	I0918 12:43:17.774289    4816 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/disk.qcow2
	I0918 12:43:17.774297    4816 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:43:17.774339    4816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:88:9f:81:1b:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/disk.qcow2
	I0918 12:43:17.775896    4816 main.go:141] libmachine: STDOUT: 
	I0918 12:43:17.775909    4816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:43:17.775929    4816 client.go:171] LocalClient.Create took 265.642625ms
	I0918 12:43:19.778115    4816 start.go:128] duration metric: createHost completed in 2.292405792s
	I0918 12:43:19.778184    4816 start.go:83] releasing machines lock for "false-716000", held for 2.292541834s
	W0918 12:43:19.778235    4816 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:19.787742    4816 out.go:177] * Deleting "false-716000" in qemu2 ...
	W0918 12:43:19.808083    4816 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:19.808107    4816 start.go:703] Will try again in 5 seconds ...
	I0918 12:43:24.810317    4816 start.go:365] acquiring machines lock for false-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:43:24.810842    4816 start.go:369] acquired machines lock for "false-716000" in 369.541µs
	I0918 12:43:24.810976    4816 start.go:93] Provisioning new machine with config: &{Name:false-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:false-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:43:24.811208    4816 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:43:24.820895    4816 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:43:24.868398    4816 start.go:159] libmachine.API.Create for "false-716000" (driver="qemu2")
	I0918 12:43:24.868440    4816 client.go:168] LocalClient.Create starting
	I0918 12:43:24.868592    4816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:43:24.868648    4816 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:24.868672    4816 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:24.868752    4816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:43:24.868788    4816 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:24.868800    4816 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:24.869260    4816 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:43:24.995492    4816 main.go:141] libmachine: Creating SSH key...
	I0918 12:43:25.134340    4816 main.go:141] libmachine: Creating Disk image...
	I0918 12:43:25.134346    4816 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:43:25.134497    4816 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/disk.qcow2
	I0918 12:43:25.143217    4816 main.go:141] libmachine: STDOUT: 
	I0918 12:43:25.143230    4816 main.go:141] libmachine: STDERR: 
	I0918 12:43:25.143290    4816 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/disk.qcow2 +20000M
	I0918 12:43:25.150460    4816 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:43:25.150474    4816 main.go:141] libmachine: STDERR: 
	I0918 12:43:25.150489    4816 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/disk.qcow2
	I0918 12:43:25.150493    4816 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:43:25.150527    4816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:83:98:75:8b:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/false-716000/disk.qcow2
	I0918 12:43:25.152126    4816 main.go:141] libmachine: STDOUT: 
	I0918 12:43:25.152140    4816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:43:25.152151    4816 client.go:171] LocalClient.Create took 283.708ms
	I0918 12:43:27.154333    4816 start.go:128] duration metric: createHost completed in 2.343121542s
	I0918 12:43:27.154430    4816 start.go:83] releasing machines lock for "false-716000", held for 2.343609417s
	W0918 12:43:27.154893    4816 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:27.160746    4816 out.go:177] 
	W0918 12:43:27.167713    4816 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:43:27.167737    4816 out.go:239] * 
	* 
	W0918 12:43:27.170508    4816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:43:27.179736    4816 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.82s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
E0918 12:43:38.745412    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.926464s)

                                                
                                                
-- stdout --
	* [enable-default-cni-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-716000 in cluster enable-default-cni-716000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-716000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:43:29.315491    4931 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:43:29.315626    4931 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:43:29.315629    4931 out.go:309] Setting ErrFile to fd 2...
	I0918 12:43:29.315631    4931 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:43:29.315755    4931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:43:29.316752    4931 out.go:303] Setting JSON to false
	I0918 12:43:29.331701    4931 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4383,"bootTime":1695061826,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:43:29.331771    4931 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:43:29.337777    4931 out.go:177] * [enable-default-cni-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:43:29.345688    4931 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:43:29.349706    4931 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:43:29.345729    4931 notify.go:220] Checking for updates...
	I0918 12:43:29.355621    4931 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:43:29.358684    4931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:43:29.361659    4931 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:43:29.364690    4931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:43:29.367858    4931 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:43:29.371643    4931 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:43:29.378679    4931 start.go:298] selected driver: qemu2
	I0918 12:43:29.378687    4931 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:43:29.378693    4931 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:43:29.380706    4931 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:43:29.383625    4931 out.go:177] * Automatically selected the socket_vmnet network
	E0918 12:43:29.386740    4931 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0918 12:43:29.386753    4931 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:43:29.386786    4931 cni.go:84] Creating CNI manager for "bridge"
	I0918 12:43:29.386791    4931 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:43:29.386796    4931 start_flags.go:321] config:
	{Name:enable-default-cni-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:43:29.391014    4931 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:43:29.397628    4931 out.go:177] * Starting control plane node enable-default-cni-716000 in cluster enable-default-cni-716000
	I0918 12:43:29.401654    4931 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:43:29.401675    4931 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:43:29.401683    4931 cache.go:57] Caching tarball of preloaded images
	I0918 12:43:29.401755    4931 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:43:29.401761    4931 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:43:29.401971    4931 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/enable-default-cni-716000/config.json ...
	I0918 12:43:29.401985    4931 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/enable-default-cni-716000/config.json: {Name:mk96cc99145dba6d01efaa546999763b94ca61f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:43:29.402193    4931 start.go:365] acquiring machines lock for enable-default-cni-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:43:29.402226    4931 start.go:369] acquired machines lock for "enable-default-cni-716000" in 25.875µs
	I0918 12:43:29.402239    4931 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:43:29.402272    4931 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:43:29.410797    4931 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:43:29.427402    4931 start.go:159] libmachine.API.Create for "enable-default-cni-716000" (driver="qemu2")
	I0918 12:43:29.427434    4931 client.go:168] LocalClient.Create starting
	I0918 12:43:29.427507    4931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:43:29.427537    4931 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:29.427549    4931 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:29.427589    4931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:43:29.427608    4931 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:29.427616    4931 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:29.427938    4931 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:43:29.540864    4931 main.go:141] libmachine: Creating SSH key...
	I0918 12:43:29.779460    4931 main.go:141] libmachine: Creating Disk image...
	I0918 12:43:29.779475    4931 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:43:29.779641    4931 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/disk.qcow2
	I0918 12:43:29.788428    4931 main.go:141] libmachine: STDOUT: 
	I0918 12:43:29.788444    4931 main.go:141] libmachine: STDERR: 
	I0918 12:43:29.788498    4931 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/disk.qcow2 +20000M
	I0918 12:43:29.795797    4931 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:43:29.795809    4931 main.go:141] libmachine: STDERR: 
	I0918 12:43:29.795825    4931 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/disk.qcow2
	I0918 12:43:29.795835    4931 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:43:29.795875    4931 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:7e:d8:8d:36:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/disk.qcow2
	I0918 12:43:29.797430    4931 main.go:141] libmachine: STDOUT: 
	I0918 12:43:29.797446    4931 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:43:29.797467    4931 client.go:171] LocalClient.Create took 370.035125ms
	I0918 12:43:31.799644    4931 start.go:128] duration metric: createHost completed in 2.397386042s
	I0918 12:43:31.799715    4931 start.go:83] releasing machines lock for "enable-default-cni-716000", held for 2.397523417s
	W0918 12:43:31.799787    4931 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:31.806172    4931 out.go:177] * Deleting "enable-default-cni-716000" in qemu2 ...
	W0918 12:43:31.828238    4931 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:31.828275    4931 start.go:703] Will try again in 5 seconds ...
	I0918 12:43:36.830494    4931 start.go:365] acquiring machines lock for enable-default-cni-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:43:36.831012    4931 start.go:369] acquired machines lock for "enable-default-cni-716000" in 394.292µs
	I0918 12:43:36.831136    4931 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:43:36.831416    4931 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:43:36.839056    4931 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:43:36.887397    4931 start.go:159] libmachine.API.Create for "enable-default-cni-716000" (driver="qemu2")
	I0918 12:43:36.887437    4931 client.go:168] LocalClient.Create starting
	I0918 12:43:36.887563    4931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:43:36.887618    4931 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:36.887651    4931 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:36.887724    4931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:43:36.887765    4931 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:36.887779    4931 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:36.888254    4931 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:43:37.015196    4931 main.go:141] libmachine: Creating SSH key...
	I0918 12:43:37.152771    4931 main.go:141] libmachine: Creating Disk image...
	I0918 12:43:37.152782    4931 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:43:37.152907    4931 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/disk.qcow2
	I0918 12:43:37.161391    4931 main.go:141] libmachine: STDOUT: 
	I0918 12:43:37.161404    4931 main.go:141] libmachine: STDERR: 
	I0918 12:43:37.161453    4931 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/disk.qcow2 +20000M
	I0918 12:43:37.168551    4931 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:43:37.168564    4931 main.go:141] libmachine: STDERR: 
	I0918 12:43:37.168576    4931 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/disk.qcow2
	I0918 12:43:37.168583    4931 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:43:37.168624    4931 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:c8:16:59:6f:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/enable-default-cni-716000/disk.qcow2
	I0918 12:43:37.170153    4931 main.go:141] libmachine: STDOUT: 
	I0918 12:43:37.170166    4931 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:43:37.170177    4931 client.go:171] LocalClient.Create took 282.739167ms
	I0918 12:43:39.172366    4931 start.go:128] duration metric: createHost completed in 2.340950833s
	I0918 12:43:39.172509    4931 start.go:83] releasing machines lock for "enable-default-cni-716000", held for 2.341471958s
	W0918 12:43:39.172877    4931 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:39.181595    4931 out.go:177] 
	W0918 12:43:39.186713    4931 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:43:39.186747    4931 out.go:239] * 
	* 
	W0918 12:43:39.189767    4931 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:43:39.197618    4931 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.93s)
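Editor's note: every network-plugin failure in this group shows the same root cause, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the qemu2 driver could not reach the socket_vmnet daemon's unix socket. As a triage sketch (not part of the test harness; the socket path comes from the `SocketVMnetPath` value visible in the config dumps above), one could check the daemon's socket on the agent before re-running:

```shell
# Triage sketch: check whether the unix socket the qemu2 driver expects
# (SocketVMnetPath in the logged config) actually exists on this host.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
    echo "socket_vmnet socket present at $SOCK"
else
    # "Connection refused" / missing socket usually means the socket_vmnet
    # daemon is not running; restarting it (e.g. via its Homebrew service,
    # if installed that way) is the usual fix.
    echo "socket_vmnet socket missing at $SOCK"
fi
```

Given that the identical error repeats across every start attempt in this run, the daemon was evidently down on the agent for the whole group.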

TestNetworkPlugins/group/flannel/Start (9.76s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.753708041s)

-- stdout --
	* [flannel-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-716000 in cluster flannel-716000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-716000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:43:41.342427    5043 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:43:41.342544    5043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:43:41.342546    5043 out.go:309] Setting ErrFile to fd 2...
	I0918 12:43:41.342549    5043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:43:41.342676    5043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:43:41.343691    5043 out.go:303] Setting JSON to false
	I0918 12:43:41.358652    5043 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4395,"bootTime":1695061826,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:43:41.358736    5043 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:43:41.363584    5043 out.go:177] * [flannel-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:43:41.371583    5043 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:43:41.371660    5043 notify.go:220] Checking for updates...
	I0918 12:43:41.379522    5043 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:43:41.383366    5043 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:43:41.386510    5043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:43:41.389551    5043 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:43:41.392566    5043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:43:41.395726    5043 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:43:41.399517    5043 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:43:41.406476    5043 start.go:298] selected driver: qemu2
	I0918 12:43:41.406482    5043 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:43:41.406489    5043 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:43:41.408453    5043 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:43:41.411568    5043 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:43:41.414652    5043 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:43:41.414678    5043 cni.go:84] Creating CNI manager for "flannel"
	I0918 12:43:41.414682    5043 start_flags.go:316] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0918 12:43:41.414690    5043 start_flags.go:321] config:
	{Name:flannel-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:flannel-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:43:41.418918    5043 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:43:41.426558    5043 out.go:177] * Starting control plane node flannel-716000 in cluster flannel-716000
	I0918 12:43:41.429428    5043 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:43:41.429444    5043 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:43:41.429453    5043 cache.go:57] Caching tarball of preloaded images
	I0918 12:43:41.429503    5043 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:43:41.429508    5043 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:43:41.429705    5043 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/flannel-716000/config.json ...
	I0918 12:43:41.429719    5043 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/flannel-716000/config.json: {Name:mkddd20cc1e8f1043cf1e4fc89787a46a5c7727b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:43:41.429944    5043 start.go:365] acquiring machines lock for flannel-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:43:41.429974    5043 start.go:369] acquired machines lock for "flannel-716000" in 24.084µs
	I0918 12:43:41.430000    5043 start.go:93] Provisioning new machine with config: &{Name:flannel-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:flannel-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:43:41.430030    5043 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:43:41.436507    5043 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:43:41.452329    5043 start.go:159] libmachine.API.Create for "flannel-716000" (driver="qemu2")
	I0918 12:43:41.452352    5043 client.go:168] LocalClient.Create starting
	I0918 12:43:41.452407    5043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:43:41.452433    5043 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:41.452446    5043 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:41.452484    5043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:43:41.452504    5043 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:41.452511    5043 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:41.452841    5043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:43:41.565415    5043 main.go:141] libmachine: Creating SSH key...
	I0918 12:43:41.668401    5043 main.go:141] libmachine: Creating Disk image...
	I0918 12:43:41.668408    5043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:43:41.668552    5043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/disk.qcow2
	I0918 12:43:41.676877    5043 main.go:141] libmachine: STDOUT: 
	I0918 12:43:41.676896    5043 main.go:141] libmachine: STDERR: 
	I0918 12:43:41.676958    5043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/disk.qcow2 +20000M
	I0918 12:43:41.684053    5043 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:43:41.684067    5043 main.go:141] libmachine: STDERR: 
	I0918 12:43:41.684086    5043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/disk.qcow2
	I0918 12:43:41.684098    5043 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:43:41.684139    5043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:0d:6d:21:dc:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/disk.qcow2
	I0918 12:43:41.685562    5043 main.go:141] libmachine: STDOUT: 
	I0918 12:43:41.685575    5043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:43:41.685594    5043 client.go:171] LocalClient.Create took 233.240208ms
	I0918 12:43:43.687760    5043 start.go:128] duration metric: createHost completed in 2.257749667s
	I0918 12:43:43.687824    5043 start.go:83] releasing machines lock for "flannel-716000", held for 2.257882833s
	W0918 12:43:43.687912    5043 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:43.696850    5043 out.go:177] * Deleting "flannel-716000" in qemu2 ...
	W0918 12:43:43.717846    5043 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:43.717869    5043 start.go:703] Will try again in 5 seconds ...
	I0918 12:43:48.720026    5043 start.go:365] acquiring machines lock for flannel-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:43:48.720484    5043 start.go:369] acquired machines lock for "flannel-716000" in 342.834µs
	I0918 12:43:48.720607    5043 start.go:93] Provisioning new machine with config: &{Name:flannel-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:flannel-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:43:48.720903    5043 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:43:48.730427    5043 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:43:48.779049    5043 start.go:159] libmachine.API.Create for "flannel-716000" (driver="qemu2")
	I0918 12:43:48.779086    5043 client.go:168] LocalClient.Create starting
	I0918 12:43:48.779234    5043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:43:48.779304    5043 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:48.779326    5043 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:48.779408    5043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:43:48.779451    5043 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:48.779468    5043 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:48.780011    5043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:43:48.907171    5043 main.go:141] libmachine: Creating SSH key...
	I0918 12:43:49.012480    5043 main.go:141] libmachine: Creating Disk image...
	I0918 12:43:49.012486    5043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:43:49.012637    5043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/disk.qcow2
	I0918 12:43:49.020999    5043 main.go:141] libmachine: STDOUT: 
	I0918 12:43:49.021013    5043 main.go:141] libmachine: STDERR: 
	I0918 12:43:49.021061    5043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/disk.qcow2 +20000M
	I0918 12:43:49.028160    5043 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:43:49.028171    5043 main.go:141] libmachine: STDERR: 
	I0918 12:43:49.028186    5043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/disk.qcow2
	I0918 12:43:49.028192    5043 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:43:49.028226    5043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:11:9f:1b:14:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/flannel-716000/disk.qcow2
	I0918 12:43:49.029732    5043 main.go:141] libmachine: STDOUT: 
	I0918 12:43:49.029743    5043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:43:49.029755    5043 client.go:171] LocalClient.Create took 250.665ms
	I0918 12:43:51.031927    5043 start.go:128] duration metric: createHost completed in 2.311023417s
	I0918 12:43:51.032079    5043 start.go:83] releasing machines lock for "flannel-716000", held for 2.311588708s
	W0918 12:43:51.032487    5043 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:51.040953    5043 out.go:177] 
	W0918 12:43:51.045950    5043 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:43:51.045983    5043 out.go:239] * 
	* 
	W0918 12:43:51.048547    5043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:43:51.057910    5043 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.76s)
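Both provisioning attempts above fail at the same step: `socket_vmnet_client` cannot reach the `/var/run/socket_vmnet` unix socket ("Connection refused"), so QEMU is never started. As a hedged diagnostic sketch (assuming the default socket_vmnet paths these logs use; the `SOCKET_VMNET_PATH` override is illustrative, not a minikube flag), the socket can be checked before re-running the suite:

```shell
#!/bin/sh
# Diagnostic sketch: check whether the socket_vmnet daemon's unix socket
# exists at the path minikube's qemu2 driver expects. Path taken from the
# logs above; adjust if socket_vmnet was installed elsewhere.
SOCKET="${SOCKET_VMNET_PATH:-/var/run/socket_vmnet}"

if [ -S "$SOCKET" ]; then
    echo "socket present: $SOCKET"
else
    # If the socket is absent, or present but no daemon is accepting
    # connections on it, minikube fails with the error seen above.
    echo "socket missing: $SOCKET"
fi
```

Note that this only verifies the socket file exists; a stale socket with no listener would still produce "Connection refused", so a live connection test (e.g. `nc -U "$SOCKET"` on macOS) may also be needed.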

TestNetworkPlugins/group/bridge/Start (9.81s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
E0918 12:44:00.230039    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.804999916s)

-- stdout --
	* [bridge-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-716000 in cluster bridge-716000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-716000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:43:53.381298    5165 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:43:53.381411    5165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:43:53.381414    5165 out.go:309] Setting ErrFile to fd 2...
	I0918 12:43:53.381416    5165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:43:53.381561    5165 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:43:53.382588    5165 out.go:303] Setting JSON to false
	I0918 12:43:53.397633    5165 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4407,"bootTime":1695061826,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:43:53.397714    5165 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:43:53.403321    5165 out.go:177] * [bridge-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:43:53.411256    5165 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:43:53.411294    5165 notify.go:220] Checking for updates...
	I0918 12:43:53.415280    5165 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:43:53.418319    5165 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:43:53.421219    5165 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:43:53.424266    5165 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:43:53.427289    5165 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:43:53.430388    5165 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:43:53.434281    5165 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:43:53.441196    5165 start.go:298] selected driver: qemu2
	I0918 12:43:53.441201    5165 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:43:53.441206    5165 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:43:53.443052    5165 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:43:53.446302    5165 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:43:53.449366    5165 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:43:53.449388    5165 cni.go:84] Creating CNI manager for "bridge"
	I0918 12:43:53.449395    5165 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:43:53.449400    5165 start_flags.go:321] config:
	{Name:bridge-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:bridge-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0918 12:43:53.453633    5165 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:43:53.460249    5165 out.go:177] * Starting control plane node bridge-716000 in cluster bridge-716000
	I0918 12:43:53.464289    5165 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:43:53.464307    5165 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:43:53.464315    5165 cache.go:57] Caching tarball of preloaded images
	I0918 12:43:53.464374    5165 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:43:53.464380    5165 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:43:53.464598    5165 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/bridge-716000/config.json ...
	I0918 12:43:53.464611    5165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/bridge-716000/config.json: {Name:mkc485faeb1decfde12e56cca5172aeec0f18087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:43:53.464827    5165 start.go:365] acquiring machines lock for bridge-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:43:53.464859    5165 start.go:369] acquired machines lock for "bridge-716000" in 26.667µs
	I0918 12:43:53.464871    5165 start.go:93] Provisioning new machine with config: &{Name:bridge-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:bridge-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:43:53.464899    5165 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:43:53.473266    5165 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:43:53.489288    5165 start.go:159] libmachine.API.Create for "bridge-716000" (driver="qemu2")
	I0918 12:43:53.489318    5165 client.go:168] LocalClient.Create starting
	I0918 12:43:53.489374    5165 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:43:53.489416    5165 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:53.489433    5165 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:53.489466    5165 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:43:53.489486    5165 main.go:141] libmachine: Decoding PEM data...
	I0918 12:43:53.489494    5165 main.go:141] libmachine: Parsing certificate...
	I0918 12:43:53.489805    5165 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:43:53.605128    5165 main.go:141] libmachine: Creating SSH key...
	I0918 12:43:53.719964    5165 main.go:141] libmachine: Creating Disk image...
	I0918 12:43:53.719975    5165 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:43:53.720119    5165 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/disk.qcow2
	I0918 12:43:53.728465    5165 main.go:141] libmachine: STDOUT: 
	I0918 12:43:53.728480    5165 main.go:141] libmachine: STDERR: 
	I0918 12:43:53.728539    5165 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/disk.qcow2 +20000M
	I0918 12:43:53.735720    5165 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:43:53.735732    5165 main.go:141] libmachine: STDERR: 
	I0918 12:43:53.735753    5165 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/disk.qcow2
	I0918 12:43:53.735761    5165 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:43:53.735798    5165 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:26:b1:f5:86:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/disk.qcow2
	I0918 12:43:53.737324    5165 main.go:141] libmachine: STDOUT: 
	I0918 12:43:53.737337    5165 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:43:53.737355    5165 client.go:171] LocalClient.Create took 248.036917ms
	I0918 12:43:55.739478    5165 start.go:128] duration metric: createHost completed in 2.27460325s
	I0918 12:43:55.739552    5165 start.go:83] releasing machines lock for "bridge-716000", held for 2.274725084s
	W0918 12:43:55.739616    5165 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:55.747929    5165 out.go:177] * Deleting "bridge-716000" in qemu2 ...
	W0918 12:43:55.768416    5165 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:43:55.768446    5165 start.go:703] Will try again in 5 seconds ...
	I0918 12:44:00.770591    5165 start.go:365] acquiring machines lock for bridge-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:00.771106    5165 start.go:369] acquired machines lock for "bridge-716000" in 403.667µs
	I0918 12:44:00.771224    5165 start.go:93] Provisioning new machine with config: &{Name:bridge-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.2 ClusterName:bridge-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:44:00.771450    5165 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:44:00.779066    5165 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:44:00.826426    5165 start.go:159] libmachine.API.Create for "bridge-716000" (driver="qemu2")
	I0918 12:44:00.826471    5165 client.go:168] LocalClient.Create starting
	I0918 12:44:00.826601    5165 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:44:00.826650    5165 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:00.826671    5165 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:00.826741    5165 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:44:00.826776    5165 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:00.826793    5165 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:00.827275    5165 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:44:00.952965    5165 main.go:141] libmachine: Creating SSH key...
	I0918 12:44:01.101734    5165 main.go:141] libmachine: Creating Disk image...
	I0918 12:44:01.101740    5165 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:44:01.101905    5165 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/disk.qcow2
	I0918 12:44:01.110626    5165 main.go:141] libmachine: STDOUT: 
	I0918 12:44:01.110644    5165 main.go:141] libmachine: STDERR: 
	I0918 12:44:01.110711    5165 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/disk.qcow2 +20000M
	I0918 12:44:01.117926    5165 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:44:01.117949    5165 main.go:141] libmachine: STDERR: 
	I0918 12:44:01.117961    5165 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/disk.qcow2
	I0918 12:44:01.117969    5165 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:44:01.118004    5165 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:2b:82:e6:d1:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/bridge-716000/disk.qcow2
	I0918 12:44:01.119513    5165 main.go:141] libmachine: STDOUT: 
	I0918 12:44:01.119526    5165 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:01.119538    5165 client.go:171] LocalClient.Create took 293.064875ms
	I0918 12:44:03.121671    5165 start.go:128] duration metric: createHost completed in 2.350235208s
	I0918 12:44:03.121727    5165 start.go:83] releasing machines lock for "bridge-716000", held for 2.350636834s
	W0918 12:44:03.122073    5165 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:03.132852    5165 out.go:177] 
	W0918 12:44:03.136905    5165 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:44:03.136929    5165 out.go:239] * 
	* 
	W0918 12:44:03.139501    5165 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:44:03.146440    5165 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.81s)
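Every failure in this group reduces to the same root cause seen in the stderr above: nothing is listening on `/var/run/socket_vmnet`, so `socket_vmnet_client` gets `Connection refused` before QEMU can even start. A minimal diagnostic sketch follows; the socket path matches the `SocketVMnetPath` in the machine config above, but the Homebrew service name and the foreground invocation are assumptions taken from the socket_vmnet README, not from this report.

```shell
#!/bin/sh
# Check whether the socket_vmnet daemon's unix socket exists.
# Path matches SocketVMnetPath:/var/run/socket_vmnet from the config dump.
SOCK=/var/run/socket_vmnet

if [ -S "$SOCK" ]; then
  echo "socket exists at $SOCK; verify the daemon is accepting connections"
else
  echo "no socket at $SOCK; socket_vmnet is likely not running"
fi

# Possible fixes (assumes socket_vmnet was installed via Homebrew):
#   sudo brew services restart socket_vmnet
# Or run it in the foreground to watch for startup errors (gateway
# address is the socket_vmnet README default, adjust as needed):
#   sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 "$SOCK"
```

Once the daemon is up, re-running the failed tests should get past host creation; the retry logic in the log (`Will try again in 5 seconds`) cannot help while the daemon is down, which is why both attempts fail identically.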

TestNetworkPlugins/group/kubenet/Start (9.74s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-716000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.7352055s)

-- stdout --
	* [kubenet-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-716000 in cluster kubenet-716000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-716000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:44:05.263664    5275 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:44:05.263786    5275 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:05.263791    5275 out.go:309] Setting ErrFile to fd 2...
	I0918 12:44:05.263794    5275 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:05.263928    5275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:44:05.264898    5275 out.go:303] Setting JSON to false
	I0918 12:44:05.279824    5275 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4419,"bootTime":1695061826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:44:05.279888    5275 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:44:05.285708    5275 out.go:177] * [kubenet-716000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:44:05.293749    5275 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:44:05.293809    5275 notify.go:220] Checking for updates...
	I0918 12:44:05.297700    5275 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:44:05.300732    5275 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:44:05.303693    5275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:44:05.306664    5275 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:44:05.309697    5275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:44:05.312919    5275 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:44:05.316677    5275 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:44:05.323688    5275 start.go:298] selected driver: qemu2
	I0918 12:44:05.323694    5275 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:44:05.323699    5275 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:44:05.325629    5275 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:44:05.328687    5275 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:44:05.331780    5275 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:44:05.331802    5275 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0918 12:44:05.331806    5275 start_flags.go:321] config:
	{Name:kubenet-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0918 12:44:05.335885    5275 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:05.342696    5275 out.go:177] * Starting control plane node kubenet-716000 in cluster kubenet-716000
	I0918 12:44:05.346700    5275 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:44:05.346717    5275 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:44:05.346726    5275 cache.go:57] Caching tarball of preloaded images
	I0918 12:44:05.346797    5275 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:44:05.346803    5275 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:44:05.347039    5275 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/kubenet-716000/config.json ...
	I0918 12:44:05.347051    5275 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/kubenet-716000/config.json: {Name:mka43c95e3b46fbc4db6c6f53db9eb87f175aa12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:44:05.347268    5275 start.go:365] acquiring machines lock for kubenet-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:05.347297    5275 start.go:369] acquired machines lock for "kubenet-716000" in 23.916µs
	I0918 12:44:05.347309    5275 start.go:93] Provisioning new machine with config: &{Name:kubenet-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:44:05.347340    5275 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:44:05.355729    5275 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:44:05.371173    5275 start.go:159] libmachine.API.Create for "kubenet-716000" (driver="qemu2")
	I0918 12:44:05.371199    5275 client.go:168] LocalClient.Create starting
	I0918 12:44:05.371257    5275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:44:05.371281    5275 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:05.371293    5275 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:05.371328    5275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:44:05.371347    5275 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:05.371355    5275 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:05.371698    5275 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:44:05.486406    5275 main.go:141] libmachine: Creating SSH key...
	I0918 12:44:05.611828    5275 main.go:141] libmachine: Creating Disk image...
	I0918 12:44:05.611836    5275 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:44:05.611972    5275 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/disk.qcow2
	I0918 12:44:05.620455    5275 main.go:141] libmachine: STDOUT: 
	I0918 12:44:05.620471    5275 main.go:141] libmachine: STDERR: 
	I0918 12:44:05.620523    5275 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/disk.qcow2 +20000M
	I0918 12:44:05.627616    5275 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:44:05.627628    5275 main.go:141] libmachine: STDERR: 
	I0918 12:44:05.627650    5275 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/disk.qcow2
	I0918 12:44:05.627658    5275 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:44:05.627697    5275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:e0:bc:5f:ca:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/disk.qcow2
	I0918 12:44:05.629168    5275 main.go:141] libmachine: STDOUT: 
	I0918 12:44:05.629181    5275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:05.629197    5275 client.go:171] LocalClient.Create took 257.998459ms
	I0918 12:44:07.631362    5275 start.go:128] duration metric: createHost completed in 2.284036458s
	I0918 12:44:07.631542    5275 start.go:83] releasing machines lock for "kubenet-716000", held for 2.28418025s
	W0918 12:44:07.631605    5275 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:07.639079    5275 out.go:177] * Deleting "kubenet-716000" in qemu2 ...
	W0918 12:44:07.659054    5275 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:07.659084    5275 start.go:703] Will try again in 5 seconds ...
	I0918 12:44:12.661221    5275 start.go:365] acquiring machines lock for kubenet-716000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:12.661710    5275 start.go:369] acquired machines lock for "kubenet-716000" in 382.542µs
	I0918 12:44:12.661852    5275 start.go:93] Provisioning new machine with config: &{Name:kubenet-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:kubenet-716000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:44:12.662108    5275 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:44:12.668733    5275 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 12:44:12.717215    5275 start.go:159] libmachine.API.Create for "kubenet-716000" (driver="qemu2")
	I0918 12:44:12.717255    5275 client.go:168] LocalClient.Create starting
	I0918 12:44:12.717395    5275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:44:12.717451    5275 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:12.717478    5275 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:12.717551    5275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:44:12.717593    5275 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:12.717612    5275 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:12.718147    5275 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:44:12.845645    5275 main.go:141] libmachine: Creating SSH key...
	I0918 12:44:12.911429    5275 main.go:141] libmachine: Creating Disk image...
	I0918 12:44:12.911434    5275 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:44:12.911581    5275 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/disk.qcow2
	I0918 12:44:12.920025    5275 main.go:141] libmachine: STDOUT: 
	I0918 12:44:12.920042    5275 main.go:141] libmachine: STDERR: 
	I0918 12:44:12.920112    5275 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/disk.qcow2 +20000M
	I0918 12:44:12.927510    5275 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:44:12.927524    5275 main.go:141] libmachine: STDERR: 
	I0918 12:44:12.927539    5275 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/disk.qcow2
	I0918 12:44:12.927547    5275 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:44:12.927585    5275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:ab:0e:6e:53:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/kubenet-716000/disk.qcow2
	I0918 12:44:12.929120    5275 main.go:141] libmachine: STDOUT: 
	I0918 12:44:12.929135    5275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:12.929154    5275 client.go:171] LocalClient.Create took 211.891292ms
	I0918 12:44:14.931288    5275 start.go:128] duration metric: createHost completed in 2.269193333s
	I0918 12:44:14.931354    5275 start.go:83] releasing machines lock for "kubenet-716000", held for 2.269662125s
	W0918 12:44:14.931762    5275 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:14.942403    5275 out.go:177] 
	W0918 12:44:14.946493    5275 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:44:14.946515    5275 out.go:239] * 
	* 
	W0918 12:44:14.949348    5275 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:44:14.959401    5275 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.74s)
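Every failed start in this report dies at the same point: `socket_vmnet_client` cannot connect to the Unix-domain socket at `/var/run/socket_vmnet`, meaning the socket_vmnet daemon on the agent is not accepting connections. A minimal probe (hypothetical helper, not part of minikube) reproduces the distinction between a live daemon and the "Connection refused" state seen above:

```python
import socket

def probe_unix_socket(path: str) -> bool:
    """Return True if something is accepting connections on the given
    Unix-domain socket path; False on ECONNREFUSED (stale/dead daemon)
    or a missing socket file."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return True
    except (ConnectionRefusedError, FileNotFoundError):
        return False
    finally:
        s.close()

if __name__ == "__main__":
    # With no socket_vmnet daemon listening, this prints False,
    # matching the repeated failure in the logs above.
    print(probe_unix_socket("/var/run/socket_vmnet"))
```

Because the failure happens before QEMU even launches, retrying VM creation (as minikube does after 5 seconds) cannot succeed until the daemon itself is restarted on the host.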

TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-933000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-933000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.766288209s)

-- stdout --
	* [old-k8s-version-933000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-933000 in cluster old-k8s-version-933000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-933000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:44:17.069787    5390 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:44:17.069928    5390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:17.069931    5390 out.go:309] Setting ErrFile to fd 2...
	I0918 12:44:17.069933    5390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:17.070058    5390 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:44:17.071084    5390 out.go:303] Setting JSON to false
	I0918 12:44:17.086370    5390 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4431,"bootTime":1695061826,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:44:17.086460    5390 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:44:17.091367    5390 out.go:177] * [old-k8s-version-933000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:44:17.095366    5390 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:44:17.099302    5390 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:44:17.095420    5390 notify.go:220] Checking for updates...
	I0918 12:44:17.103328    5390 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:44:17.106403    5390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:44:17.109267    5390 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:44:17.112313    5390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:44:17.115462    5390 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:44:17.119331    5390 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:44:17.126309    5390 start.go:298] selected driver: qemu2
	I0918 12:44:17.126315    5390 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:44:17.126325    5390 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:44:17.128371    5390 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:44:17.131325    5390 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:44:17.134436    5390 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:44:17.134454    5390 cni.go:84] Creating CNI manager for ""
	I0918 12:44:17.134461    5390 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 12:44:17.134465    5390 start_flags.go:321] config:
	{Name:old-k8s-version-933000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-933000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:44:17.138498    5390 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:17.146369    5390 out.go:177] * Starting control plane node old-k8s-version-933000 in cluster old-k8s-version-933000
	I0918 12:44:17.150301    5390 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0918 12:44:17.150316    5390 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0918 12:44:17.150320    5390 cache.go:57] Caching tarball of preloaded images
	I0918 12:44:17.150367    5390 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:44:17.150372    5390 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0918 12:44:17.150573    5390 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/old-k8s-version-933000/config.json ...
	I0918 12:44:17.150585    5390 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/old-k8s-version-933000/config.json: {Name:mkf147cea30f16c6549141c414086b4f70205e2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:44:17.150796    5390 start.go:365] acquiring machines lock for old-k8s-version-933000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:17.150825    5390 start.go:369] acquired machines lock for "old-k8s-version-933000" in 22.375µs
	I0918 12:44:17.150836    5390 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-933000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-933000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:44:17.150863    5390 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:44:17.159328    5390 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:44:17.173642    5390 start.go:159] libmachine.API.Create for "old-k8s-version-933000" (driver="qemu2")
	I0918 12:44:17.173665    5390 client.go:168] LocalClient.Create starting
	I0918 12:44:17.173721    5390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:44:17.173749    5390 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:17.173764    5390 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:17.173796    5390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:44:17.173818    5390 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:17.173824    5390 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:17.174129    5390 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:44:17.290383    5390 main.go:141] libmachine: Creating SSH key...
	I0918 12:44:17.355705    5390 main.go:141] libmachine: Creating Disk image...
	I0918 12:44:17.355711    5390 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:44:17.355856    5390 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2
	I0918 12:44:17.364336    5390 main.go:141] libmachine: STDOUT: 
	I0918 12:44:17.364350    5390 main.go:141] libmachine: STDERR: 
	I0918 12:44:17.364409    5390 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2 +20000M
	I0918 12:44:17.371571    5390 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:44:17.371583    5390 main.go:141] libmachine: STDERR: 
	I0918 12:44:17.371594    5390 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2
	I0918 12:44:17.371601    5390 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:44:17.371637    5390 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:69:b6:ab:95:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2
	I0918 12:44:17.373262    5390 main.go:141] libmachine: STDOUT: 
	I0918 12:44:17.373275    5390 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:17.373293    5390 client.go:171] LocalClient.Create took 199.627583ms
	I0918 12:44:19.375435    5390 start.go:128] duration metric: createHost completed in 2.224589459s
	I0918 12:44:19.375490    5390 start.go:83] releasing machines lock for "old-k8s-version-933000", held for 2.224698083s
	W0918 12:44:19.375552    5390 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:19.384766    5390 out.go:177] * Deleting "old-k8s-version-933000" in qemu2 ...
	W0918 12:44:19.409457    5390 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:19.409486    5390 start.go:703] Will try again in 5 seconds ...
	I0918 12:44:24.411745    5390 start.go:365] acquiring machines lock for old-k8s-version-933000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:24.412167    5390 start.go:369] acquired machines lock for "old-k8s-version-933000" in 318.042µs
	I0918 12:44:24.412293    5390 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-933000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-933000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:44:24.412532    5390 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:44:24.419245    5390 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:44:24.467188    5390 start.go:159] libmachine.API.Create for "old-k8s-version-933000" (driver="qemu2")
	I0918 12:44:24.467242    5390 client.go:168] LocalClient.Create starting
	I0918 12:44:24.467371    5390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:44:24.467430    5390 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:24.467458    5390 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:24.467524    5390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:44:24.467564    5390 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:24.467582    5390 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:24.468137    5390 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:44:24.594339    5390 main.go:141] libmachine: Creating SSH key...
	I0918 12:44:24.752046    5390 main.go:141] libmachine: Creating Disk image...
	I0918 12:44:24.752059    5390 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:44:24.752214    5390 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2
	I0918 12:44:24.761172    5390 main.go:141] libmachine: STDOUT: 
	I0918 12:44:24.761186    5390 main.go:141] libmachine: STDERR: 
	I0918 12:44:24.761249    5390 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2 +20000M
	I0918 12:44:24.768504    5390 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:44:24.768516    5390 main.go:141] libmachine: STDERR: 
	I0918 12:44:24.768530    5390 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2
	I0918 12:44:24.768537    5390 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:44:24.768582    5390 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:40:22:8e:61:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2
	I0918 12:44:24.770133    5390 main.go:141] libmachine: STDOUT: 
	I0918 12:44:24.770148    5390 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:24.770161    5390 client.go:171] LocalClient.Create took 302.917375ms
	I0918 12:44:26.772295    5390 start.go:128] duration metric: createHost completed in 2.359777667s
	I0918 12:44:26.772359    5390 start.go:83] releasing machines lock for "old-k8s-version-933000", held for 2.36021275s
	W0918 12:44:26.772719    5390 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-933000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-933000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:26.781329    5390 out.go:177] 
	W0918 12:44:26.784466    5390 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:44:26.784496    5390 out.go:239] * 
	* 
	W0918 12:44:26.786903    5390 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:44:26.796433    5390 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-933000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000: exit status 7 (65.096583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-933000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-933000 create -f testdata/busybox.yaml: exit status 1 (29.627208ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-933000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000: exit status 7 (28.255125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-933000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000: exit status 7 (27.75775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-933000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-933000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-933000 describe deploy/metrics-server -n kube-system: exit status 1 (25.680875ms)

** stderr ** 
	error: context "old-k8s-version-933000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-933000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000: exit status 7 (28.408417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-933000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-933000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.182683458s)

-- stdout --
	* [old-k8s-version-933000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-933000 in cluster old-k8s-version-933000
	* Restarting existing qemu2 VM for "old-k8s-version-933000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-933000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:44:27.249435    5425 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:44:27.249568    5425 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:27.249570    5425 out.go:309] Setting ErrFile to fd 2...
	I0918 12:44:27.249573    5425 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:27.249697    5425 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:44:27.250737    5425 out.go:303] Setting JSON to false
	I0918 12:44:27.265732    5425 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4441,"bootTime":1695061826,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:44:27.265816    5425 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:44:27.269835    5425 out.go:177] * [old-k8s-version-933000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:44:27.275726    5425 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:44:27.279706    5425 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:44:27.275759    5425 notify.go:220] Checking for updates...
	I0918 12:44:27.285687    5425 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:44:27.288717    5425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:44:27.289992    5425 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:44:27.292684    5425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:44:27.296008    5425 config.go:182] Loaded profile config "old-k8s-version-933000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0918 12:44:27.299703    5425 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0918 12:44:27.302661    5425 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:44:27.306652    5425 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 12:44:27.313707    5425 start.go:298] selected driver: qemu2
	I0918 12:44:27.313715    5425 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-933000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-933000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:44:27.313779    5425 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:44:27.315859    5425 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:44:27.315893    5425 cni.go:84] Creating CNI manager for ""
	I0918 12:44:27.315900    5425 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 12:44:27.315909    5425 start_flags.go:321] config:
	{Name:old-k8s-version-933000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-933000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:44:27.319989    5425 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:27.326651    5425 out.go:177] * Starting control plane node old-k8s-version-933000 in cluster old-k8s-version-933000
	I0918 12:44:27.330697    5425 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0918 12:44:27.330715    5425 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0918 12:44:27.330728    5425 cache.go:57] Caching tarball of preloaded images
	I0918 12:44:27.330787    5425 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:44:27.330793    5425 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0918 12:44:27.330880    5425 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/old-k8s-version-933000/config.json ...
	I0918 12:44:27.331252    5425 start.go:365] acquiring machines lock for old-k8s-version-933000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:27.331278    5425 start.go:369] acquired machines lock for "old-k8s-version-933000" in 20.25µs
	I0918 12:44:27.331288    5425 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:44:27.331292    5425 fix.go:54] fixHost starting: 
	I0918 12:44:27.331407    5425 fix.go:102] recreateIfNeeded on old-k8s-version-933000: state=Stopped err=<nil>
	W0918 12:44:27.331415    5425 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:44:27.335675    5425 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-933000" ...
	I0918 12:44:27.343729    5425 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:40:22:8e:61:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2
	I0918 12:44:27.345594    5425 main.go:141] libmachine: STDOUT: 
	I0918 12:44:27.345612    5425 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:27.345639    5425 fix.go:56] fixHost completed within 14.346791ms
	I0918 12:44:27.345645    5425 start.go:83] releasing machines lock for "old-k8s-version-933000", held for 14.363208ms
	W0918 12:44:27.345652    5425 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:44:27.345689    5425 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:27.345694    5425 start.go:703] Will try again in 5 seconds ...
	I0918 12:44:32.347821    5425 start.go:365] acquiring machines lock for old-k8s-version-933000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:32.348337    5425 start.go:369] acquired machines lock for "old-k8s-version-933000" in 406.875µs
	I0918 12:44:32.348523    5425 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:44:32.348543    5425 fix.go:54] fixHost starting: 
	I0918 12:44:32.349397    5425 fix.go:102] recreateIfNeeded on old-k8s-version-933000: state=Stopped err=<nil>
	W0918 12:44:32.349424    5425 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:44:32.357782    5425 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-933000" ...
	I0918 12:44:32.361116    5425 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:40:22:8e:61:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/old-k8s-version-933000/disk.qcow2
	I0918 12:44:32.370018    5425 main.go:141] libmachine: STDOUT: 
	I0918 12:44:32.370065    5425 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:32.370142    5425 fix.go:56] fixHost completed within 21.599291ms
	I0918 12:44:32.370161    5425 start.go:83] releasing machines lock for "old-k8s-version-933000", held for 21.800958ms
	W0918 12:44:32.370317    5425 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-933000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-933000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:32.378878    5425 out.go:177] 
	W0918 12:44:32.382950    5425 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:44:32.382979    5425 out.go:239] * 
	* 
	W0918 12:44:32.385817    5425 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:44:32.392931    5425 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-933000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000: exit status 7 (65.358625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-933000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000: exit status 7 (31.475584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-933000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-933000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-933000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.940875ms)

** stderr ** 
	error: context "old-k8s-version-933000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-933000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000: exit status 7 (27.999083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-933000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-933000 "sudo crictl images -o json": exit status 89 (36.022417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-933000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-933000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-933000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000: exit status 7 (27.6725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.06s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-933000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-933000 --alsologtostderr -v=1: exit status 89 (40.126ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-933000"

-- /stdout --
** stderr ** 
	I0918 12:44:32.649355    5444 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:44:32.649746    5444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:32.649749    5444 out.go:309] Setting ErrFile to fd 2...
	I0918 12:44:32.649752    5444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:32.649921    5444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:44:32.650132    5444 out.go:303] Setting JSON to false
	I0918 12:44:32.650141    5444 mustload.go:65] Loading cluster: old-k8s-version-933000
	I0918 12:44:32.650339    5444 config.go:182] Loaded profile config "old-k8s-version-933000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0918 12:44:32.654355    5444 out.go:177] * The control plane node must be running for this command
	I0918 12:44:32.658466    5444 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-933000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-933000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000: exit status 7 (27.676041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-933000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000: exit status 7 (27.85625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-933000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.71s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-249000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-249000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (9.639957208s)

-- stdout --
	* [no-preload-249000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-249000 in cluster no-preload-249000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-249000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:44:33.111173    5467 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:44:33.111301    5467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:33.111304    5467 out.go:309] Setting ErrFile to fd 2...
	I0918 12:44:33.111307    5467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:33.111473    5467 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:44:33.112476    5467 out.go:303] Setting JSON to false
	I0918 12:44:33.127488    5467 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4447,"bootTime":1695061826,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:44:33.127581    5467 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:44:33.132516    5467 out.go:177] * [no-preload-249000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:44:33.139420    5467 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:44:33.139503    5467 notify.go:220] Checking for updates...
	I0918 12:44:33.143428    5467 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:44:33.146333    5467 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:44:33.149390    5467 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:44:33.152403    5467 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:44:33.155385    5467 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:44:33.158670    5467 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:44:33.162414    5467 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:44:33.169401    5467 start.go:298] selected driver: qemu2
	I0918 12:44:33.169408    5467 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:44:33.169413    5467 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:44:33.171380    5467 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:44:33.174425    5467 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:44:33.177403    5467 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:44:33.177421    5467 cni.go:84] Creating CNI manager for ""
	I0918 12:44:33.177429    5467 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:44:33.177433    5467 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:44:33.177440    5467 start_flags.go:321] config:
	{Name:no-preload-249000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-249000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSH
AgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:44:33.181612    5467 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:33.188414    5467 out.go:177] * Starting control plane node no-preload-249000 in cluster no-preload-249000
	I0918 12:44:33.192422    5467 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:44:33.192541    5467 cache.go:107] acquiring lock: {Name:mk05360895828a594941f02758702e2ee3934c43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:33.192554    5467 cache.go:107] acquiring lock: {Name:mkb4e1494b3cfbe5cc04f6f8e525176492d4c441 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:33.192556    5467 cache.go:107] acquiring lock: {Name:mkd2c651e69a929e15471a0b04768b3babcd14aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:33.192689    5467 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0918 12:44:33.192695    5467 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I0918 12:44:33.192700    5467 cache.go:107] acquiring lock: {Name:mk60f799e1b1dc4eb16bed1d8cf8203565eb8d64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:33.192716    5467 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I0918 12:44:33.192735    5467 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/no-preload-249000/config.json ...
	I0918 12:44:33.192539    5467 cache.go:107] acquiring lock: {Name:mk66aa807de4a41bb93b7968a361b55b7b9dc442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:33.192796    5467 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I0918 12:44:33.192793    5467 cache.go:107] acquiring lock: {Name:mk16c86c2092140a82402d28511092f1c95af497 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:33.192541    5467 cache.go:107] acquiring lock: {Name:mk7ca9e887704d4480491a5c81d1c5e1fad73157 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:33.192814    5467 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0918 12:44:33.192821    5467 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 296µs
	I0918 12:44:33.192828    5467 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0918 12:44:33.192832    5467 cache.go:107] acquiring lock: {Name:mk8e758ef34e93eabe6599c8eb50bf6615524a35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:33.192832    5467 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/no-preload-249000/config.json: {Name:mk6325f5988dd9869e0567abaf1f2e3e970253e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:44:33.192885    5467 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I0918 12:44:33.192955    5467 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0918 12:44:33.192968    5467 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0918 12:44:33.193031    5467 start.go:365] acquiring machines lock for no-preload-249000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:33.193063    5467 start.go:369] acquired machines lock for "no-preload-249000" in 23.708µs
	I0918 12:44:33.193077    5467 start.go:93] Provisioning new machine with config: &{Name:no-preload-249000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-249000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:44:33.193111    5467 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:44:33.201357    5467 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:44:33.207851    5467 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I0918 12:44:33.207885    5467 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0918 12:44:33.208443    5467 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I0918 12:44:33.209430    5467 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0918 12:44:33.209579    5467 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I0918 12:44:33.209637    5467 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I0918 12:44:33.211386    5467 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0918 12:44:33.216632    5467 start.go:159] libmachine.API.Create for "no-preload-249000" (driver="qemu2")
	I0918 12:44:33.216656    5467 client.go:168] LocalClient.Create starting
	I0918 12:44:33.216724    5467 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:44:33.216753    5467 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:33.216764    5467 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:33.216802    5467 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:44:33.216820    5467 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:33.216826    5467 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:33.217140    5467 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:44:33.335430    5467 main.go:141] libmachine: Creating SSH key...
	I0918 12:44:33.389185    5467 main.go:141] libmachine: Creating Disk image...
	I0918 12:44:33.389196    5467 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:44:33.389335    5467 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2
	I0918 12:44:33.397963    5467 main.go:141] libmachine: STDOUT: 
	I0918 12:44:33.397978    5467 main.go:141] libmachine: STDERR: 
	I0918 12:44:33.398027    5467 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2 +20000M
	I0918 12:44:33.406028    5467 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:44:33.406055    5467 main.go:141] libmachine: STDERR: 
	I0918 12:44:33.406076    5467 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2
	I0918 12:44:33.406082    5467 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:44:33.406129    5467 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:bd:ef:3d:18:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2
	I0918 12:44:33.407803    5467 main.go:141] libmachine: STDOUT: 
	I0918 12:44:33.407819    5467 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:33.407838    5467 client.go:171] LocalClient.Create took 191.182416ms
	I0918 12:44:33.802568    5467 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2
	I0918 12:44:33.832348    5467 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0918 12:44:33.932096    5467 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0918 12:44:33.932115    5467 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 739.573834ms
	I0918 12:44:33.932125    5467 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0918 12:44:34.014260    5467 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2
	I0918 12:44:34.234427    5467 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0918 12:44:34.487775    5467 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2
	I0918 12:44:34.646441    5467 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2
	I0918 12:44:34.832406    5467 cache.go:162] opening:  /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0918 12:44:35.408039    5467 start.go:128] duration metric: createHost completed in 2.214921584s
	I0918 12:44:35.408124    5467 start.go:83] releasing machines lock for "no-preload-249000", held for 2.215091333s
	W0918 12:44:35.408189    5467 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:35.418208    5467 out.go:177] * Deleting "no-preload-249000" in qemu2 ...
	W0918 12:44:35.436938    5467 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:35.437061    5467 start.go:703] Will try again in 5 seconds ...
	I0918 12:44:36.170929    5467 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0918 12:44:36.170977    5467 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 2.978212417s
	I0918 12:44:36.171003    5467 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0918 12:44:36.269198    5467 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 exists
	I0918 12:44:36.269245    5467 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2" took 3.076771208s
	I0918 12:44:36.269278    5467 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 succeeded
	I0918 12:44:37.626028    5467 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 exists
	I0918 12:44:37.626101    5467 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2" took 4.43361725s
	I0918 12:44:37.626172    5467 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 succeeded
	I0918 12:44:38.262460    5467 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 exists
	I0918 12:44:38.262534    5467 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2" took 5.070099s
	I0918 12:44:38.262562    5467 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 succeeded
	I0918 12:44:38.287232    5467 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 exists
	I0918 12:44:38.287270    5467 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2" took 5.094691s
	I0918 12:44:38.287293    5467 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 succeeded
	I0918 12:44:40.437183    5467 start.go:365] acquiring machines lock for no-preload-249000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:40.437569    5467 start.go:369] acquired machines lock for "no-preload-249000" in 314.458µs
	I0918 12:44:40.437702    5467 start.go:93] Provisioning new machine with config: &{Name:no-preload-249000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-249000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:44:40.437915    5467 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:44:40.446722    5467 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:44:40.493523    5467 start.go:159] libmachine.API.Create for "no-preload-249000" (driver="qemu2")
	I0918 12:44:40.493566    5467 client.go:168] LocalClient.Create starting
	I0918 12:44:40.493701    5467 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:44:40.493761    5467 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:40.493786    5467 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:40.493880    5467 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:44:40.493929    5467 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:40.493948    5467 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:40.494487    5467 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:44:40.622990    5467 main.go:141] libmachine: Creating SSH key...
	I0918 12:44:40.667832    5467 main.go:141] libmachine: Creating Disk image...
	I0918 12:44:40.667839    5467 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:44:40.668002    5467 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2
	I0918 12:44:40.676568    5467 main.go:141] libmachine: STDOUT: 
	I0918 12:44:40.676594    5467 main.go:141] libmachine: STDERR: 
	I0918 12:44:40.676663    5467 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2 +20000M
	I0918 12:44:40.683997    5467 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:44:40.684017    5467 main.go:141] libmachine: STDERR: 
	I0918 12:44:40.684032    5467 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2
	I0918 12:44:40.684040    5467 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:44:40.684085    5467 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:7b:36:68:1f:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2
	I0918 12:44:40.685682    5467 main.go:141] libmachine: STDOUT: 
	I0918 12:44:40.685694    5467 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:40.685706    5467 client.go:171] LocalClient.Create took 192.139459ms
	I0918 12:44:42.031602    5467 cache.go:157] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0918 12:44:42.031659    5467 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 8.839089875s
	I0918 12:44:42.031686    5467 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0918 12:44:42.031727    5467 cache.go:87] Successfully saved all images to host disk.
	I0918 12:44:42.687914    5467 start.go:128] duration metric: createHost completed in 2.250012416s
	I0918 12:44:42.687975    5467 start.go:83] releasing machines lock for "no-preload-249000", held for 2.2504255s
	W0918 12:44:42.688229    5467 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-249000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-249000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:42.696671    5467 out.go:177] 
	W0918 12:44:42.700779    5467 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:44:42.700805    5467 out.go:239] * 
	* 
	W0918 12:44:42.704015    5467 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:44:42.712980    5467 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-249000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000: exit status 7 (64.264042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-249000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.71s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-249000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-249000 create -f testdata/busybox.yaml: exit status 1 (29.081708ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-249000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000: exit status 7 (28.44025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-249000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000: exit status 7 (27.901125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-249000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-249000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-249000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-249000 describe deploy/metrics-server -n kube-system: exit status 1 (25.808959ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-249000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-249000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000: exit status 7 (28.281708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-249000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (7.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-249000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-249000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (6.963257959s)

                                                
                                                
-- stdout --
	* [no-preload-249000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-249000 in cluster no-preload-249000
	* Restarting existing qemu2 VM for "no-preload-249000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-249000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 12:44:43.166840    5599 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:44:43.166964    5599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:43.166967    5599 out.go:309] Setting ErrFile to fd 2...
	I0918 12:44:43.166969    5599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:43.167095    5599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:44:43.168086    5599 out.go:303] Setting JSON to false
	I0918 12:44:43.183095    5599 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4457,"bootTime":1695061826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:44:43.183185    5599 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:44:43.187592    5599 out.go:177] * [no-preload-249000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:44:43.194763    5599 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:44:43.197646    5599 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:44:43.194812    5599 notify.go:220] Checking for updates...
	I0918 12:44:43.203714    5599 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:44:43.205092    5599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:44:43.207680    5599 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:44:43.210783    5599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:44:43.214069    5599 config.go:182] Loaded profile config "no-preload-249000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:44:43.214352    5599 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:44:43.218755    5599 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 12:44:43.225737    5599 start.go:298] selected driver: qemu2
	I0918 12:44:43.225742    5599 start.go:902] validating driver "qemu2" against &{Name:no-preload-249000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-249000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:44:43.225790    5599 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:44:43.227739    5599 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:44:43.227764    5599 cni.go:84] Creating CNI manager for ""
	I0918 12:44:43.227772    5599 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:44:43.227777    5599 start_flags.go:321] config:
	{Name:no-preload-249000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-249000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:44:43.231767    5599 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.234733    5599 out.go:177] * Starting control plane node no-preload-249000 in cluster no-preload-249000
	I0918 12:44:43.242682    5599 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:44:43.242748    5599 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/no-preload-249000/config.json ...
	I0918 12:44:43.242777    5599 cache.go:107] acquiring lock: {Name:mk60f799e1b1dc4eb16bed1d8cf8203565eb8d64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.242781    5599 cache.go:107] acquiring lock: {Name:mk66aa807de4a41bb93b7968a361b55b7b9dc442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.242806    5599 cache.go:107] acquiring lock: {Name:mk7ca9e887704d4480491a5c81d1c5e1fad73157 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.242841    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 exists
	I0918 12:44:43.242847    5599 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2" took 79.459µs
	I0918 12:44:43.242854    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 exists
	I0918 12:44:43.242861    5599 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2" took 55.958µs
	I0918 12:44:43.242867    5599 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 succeeded
	I0918 12:44:43.242868    5599 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 succeeded
	I0918 12:44:43.242878    5599 cache.go:107] acquiring lock: {Name:mkd2c651e69a929e15471a0b04768b3babcd14aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.242888    5599 cache.go:107] acquiring lock: {Name:mkb4e1494b3cfbe5cc04f6f8e525176492d4c441 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.242915    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 exists
	I0918 12:44:43.242918    5599 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2" took 46.75µs
	I0918 12:44:43.242922    5599 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 succeeded
	I0918 12:44:43.242928    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0918 12:44:43.242928    5599 cache.go:107] acquiring lock: {Name:mk8e758ef34e93eabe6599c8eb50bf6615524a35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.242933    5599 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 45.5µs
	I0918 12:44:43.242937    5599 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0918 12:44:43.242964    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0918 12:44:43.242968    5599 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 41.417µs
	I0918 12:44:43.242972    5599 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0918 12:44:43.242964    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0918 12:44:43.242976    5599 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 197.75µs
	I0918 12:44:43.242980    5599 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0918 12:44:43.242946    5599 cache.go:107] acquiring lock: {Name:mk05360895828a594941f02758702e2ee3934c43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.243031    5599 cache.go:107] acquiring lock: {Name:mk16c86c2092140a82402d28511092f1c95af497 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.243062    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 exists
	I0918 12:44:43.243066    5599 start.go:365] acquiring machines lock for no-preload-249000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:43.243068    5599 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2" took 149.917µs
	I0918 12:44:43.243072    5599 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 succeeded
	I0918 12:44:43.243087    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0918 12:44:43.243088    5599 start.go:369] acquired machines lock for "no-preload-249000" in 18.041µs
	I0918 12:44:43.243090    5599 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 129.333µs
	I0918 12:44:43.243094    5599 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0918 12:44:43.243098    5599 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:44:43.243099    5599 cache.go:87] Successfully saved all images to host disk.
	I0918 12:44:43.243103    5599 fix.go:54] fixHost starting: 
	I0918 12:44:43.243219    5599 fix.go:102] recreateIfNeeded on no-preload-249000: state=Stopped err=<nil>
	W0918 12:44:43.243228    5599 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:44:43.255709    5599 out.go:177] * Restarting existing qemu2 VM for "no-preload-249000" ...
	I0918 12:44:43.259811    5599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:7b:36:68:1f:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2
	I0918 12:44:43.261690    5599 main.go:141] libmachine: STDOUT: 
	I0918 12:44:43.261709    5599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:43.261733    5599 fix.go:56] fixHost completed within 18.631041ms
	I0918 12:44:43.261735    5599 start.go:83] releasing machines lock for "no-preload-249000", held for 18.64475ms
	W0918 12:44:43.261742    5599 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:44:43.261770    5599 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:43.261774    5599 start.go:703] Will try again in 5 seconds ...
	I0918 12:44:48.263963    5599 start.go:365] acquiring machines lock for no-preload-249000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:50.036572    5599 start.go:369] acquired machines lock for "no-preload-249000" in 1.772522125s
	I0918 12:44:50.036717    5599 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:44:50.036752    5599 fix.go:54] fixHost starting: 
	I0918 12:44:50.037460    5599 fix.go:102] recreateIfNeeded on no-preload-249000: state=Stopped err=<nil>
	W0918 12:44:50.037489    5599 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:44:50.042266    5599 out.go:177] * Restarting existing qemu2 VM for "no-preload-249000" ...
	I0918 12:44:50.053752    5599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:7b:36:68:1f:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/no-preload-249000/disk.qcow2
	I0918 12:44:50.062761    5599 main.go:141] libmachine: STDOUT: 
	I0918 12:44:50.062828    5599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:50.062920    5599 fix.go:56] fixHost completed within 26.1685ms
	I0918 12:44:50.062938    5599 start.go:83] releasing machines lock for "no-preload-249000", held for 26.333708ms
	W0918 12:44:50.063147    5599 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-249000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-249000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:50.072077    5599 out.go:177] 
	W0918 12:44:50.076088    5599 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:44:50.076106    5599 out.go:239] * 
	* 
	W0918 12:44:50.078144    5599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:44:50.088969    5599 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-249000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000: exit status 7 (63.569667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-249000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.03s)
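The repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` errors above suggest the socket_vmnet helper was not running on the build agent when the qemu2 driver tried to attach networking. A minimal diagnostic sketch for the agent (the socket path matches the one in the log; the check itself is an assumption about a typical setup, not part of this report):

```shell
#!/bin/sh
# Check whether the unix socket that minikube's qemu2 driver dials exists.
socket_vmnet_ok() {
  # True only if the path exists and is a unix-domain socket.
  [ -S "$1" ]
}

if socket_vmnet_ok /var/run/socket_vmnet; then
  echo "socket_vmnet socket present"
else
  echo "socket_vmnet socket missing; start the socket_vmnet service before retrying 'minikube start'"
fi
```

If the socket is missing, restarting the socket_vmnet service (however it is managed on the agent) is the likely remediation before rerunning the suite.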

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (2.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1693533721.exe start -p stopped-upgrade-869000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1693533721.exe start -p stopped-upgrade-869000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1693533721.exe: permission denied (5.526458ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1693533721.exe start -p stopped-upgrade-869000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1693533721.exe start -p stopped-upgrade-869000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1693533721.exe: permission denied (5.337875ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1693533721.exe start -p stopped-upgrade-869000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1693533721.exe start -p stopped-upgrade-869000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1693533721.exe: permission denied (5.198625ms)
version_upgrade_test.go:202: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.1693533721.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.32s)
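The three identical `fork/exec ...: permission denied` retries above indicate the downloaded legacy v1.6.2 binary was written to the temp directory without its executable bit set. A hedged sketch of the usual fix (the path below is illustrative only; the real run used a generated path under /var/folders that is left as-is above):

```shell
#!/bin/sh
# Make sure a downloaded binary carries the executable bit before exec'ing it.
ensure_executable() {
  # Add +x only when it is missing; no-op if the file is already executable.
  [ -x "$1" ] || chmod +x "$1"
}

# Illustrative path only, not the actual temp file from this run.
ensure_executable /tmp/minikube-v1.6.2.example 2>/dev/null || true
```

The test harness that downloads the legacy binary would be the natural place for such a call, immediately after the file is written.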

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-869000
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-869000: exit status 85 (113.484709ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-716000 sudo                                  | bridge-716000          | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p bridge-716000 sudo                                  | bridge-716000          | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-716000 sudo                                  | bridge-716000          | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-716000 sudo cat                              | bridge-716000          | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-716000 sudo cat                              | bridge-716000          | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-716000 sudo                                  | bridge-716000          | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-716000 sudo                                  | bridge-716000          | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-716000 sudo                                  | bridge-716000          | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-716000 sudo find                             | bridge-716000          | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-716000 sudo crio                             | bridge-716000          | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-716000                                       | bridge-716000          | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT | 18 Sep 23 12:44 PDT |
	| start   | -p kubenet-716000                                      | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | --memory=3072                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                        |         |         |                     |                     |
	|         | --network-plugin=kubenet                               |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo cat                             | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /etc/nsswitch.conf                                     |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo cat                             | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /etc/hosts                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo cat                             | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /etc/resolv.conf                                       |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo crictl                          | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | pods                                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo crictl                          | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | ps --all                                               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo find                            | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                           |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo ip a s                          | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	| ssh     | -p kubenet-716000 sudo ip r s                          | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | iptables-save                                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | iptables -t nat -L -n -v                               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl status kubelet --all                         |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl cat kubelet                                  |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | journalctl -xeu kubelet --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo cat                             | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo cat                             | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl status docker --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl cat docker                                   |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo cat                             | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /etc/docker/daemon.json                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo docker                          | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | system info                                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl status cri-docker                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl cat cri-docker                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo cat                             | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo cat                             | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo cat                             | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo cat                             | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo                                 | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo find                            | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-716000 sudo crio                            | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-716000                                      | kubenet-716000         | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT | 18 Sep 23 12:44 PDT |
	| start   | -p old-k8s-version-933000                              | old-k8s-version-933000 | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-933000        | old-k8s-version-933000 | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT | 18 Sep 23 12:44 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-933000                              | old-k8s-version-933000 | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT | 18 Sep 23 12:44 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-933000             | old-k8s-version-933000 | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT | 18 Sep 23 12:44 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-933000                              | old-k8s-version-933000 | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| ssh     | -p old-k8s-version-933000 sudo                         | old-k8s-version-933000 | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p old-k8s-version-933000                              | old-k8s-version-933000 | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p old-k8s-version-933000                              | old-k8s-version-933000 | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT | 18 Sep 23 12:44 PDT |
	| delete  | -p old-k8s-version-933000                              | old-k8s-version-933000 | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT | 18 Sep 23 12:44 PDT |
	| start   | -p no-preload-249000                                   | no-preload-249000      | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=qemu2                         |                        |         |         |                     |                     |
	|         |  --kubernetes-version=v1.28.2                          |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-249000             | no-preload-249000      | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT | 18 Sep 23 12:44 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-249000                                   | no-preload-249000      | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT | 18 Sep 23 12:44 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-249000                  | no-preload-249000      | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT | 18 Sep 23 12:44 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-249000                                   | no-preload-249000      | jenkins | v1.31.2 | 18 Sep 23 12:44 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=qemu2                         |                        |         |         |                     |                     |
	|         |  --kubernetes-version=v1.28.2                          |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 12:44:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 12:44:43.166840    5599 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:44:43.166964    5599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:43.166967    5599 out.go:309] Setting ErrFile to fd 2...
	I0918 12:44:43.166969    5599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:43.167095    5599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:44:43.168086    5599 out.go:303] Setting JSON to false
	I0918 12:44:43.183095    5599 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4457,"bootTime":1695061826,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:44:43.183185    5599 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:44:43.187592    5599 out.go:177] * [no-preload-249000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:44:43.194763    5599 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:44:43.197646    5599 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:44:43.194812    5599 notify.go:220] Checking for updates...
	I0918 12:44:43.203714    5599 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:44:43.205092    5599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:44:43.207680    5599 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:44:43.210783    5599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:44:43.214069    5599 config.go:182] Loaded profile config "no-preload-249000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:44:43.214352    5599 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:44:43.218755    5599 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 12:44:43.225737    5599 start.go:298] selected driver: qemu2
	I0918 12:44:43.225742    5599 start.go:902] validating driver "qemu2" against &{Name:no-preload-249000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-249000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:44:43.225790    5599 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:44:43.227739    5599 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:44:43.227764    5599 cni.go:84] Creating CNI manager for ""
	I0918 12:44:43.227772    5599 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:44:43.227777    5599 start_flags.go:321] config:
	{Name:no-preload-249000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-249000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:44:43.231767    5599 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.234733    5599 out.go:177] * Starting control plane node no-preload-249000 in cluster no-preload-249000
	I0918 12:44:43.242682    5599 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:44:43.242748    5599 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/no-preload-249000/config.json ...
	I0918 12:44:43.242777    5599 cache.go:107] acquiring lock: {Name:mk60f799e1b1dc4eb16bed1d8cf8203565eb8d64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.242781    5599 cache.go:107] acquiring lock: {Name:mk66aa807de4a41bb93b7968a361b55b7b9dc442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.242806    5599 cache.go:107] acquiring lock: {Name:mk7ca9e887704d4480491a5c81d1c5e1fad73157 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.242841    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 exists
	I0918 12:44:43.242847    5599 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2" took 79.459µs
	I0918 12:44:43.242854    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 exists
	I0918 12:44:43.242861    5599 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2" took 55.958µs
	I0918 12:44:43.242867    5599 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.2 succeeded
	I0918 12:44:43.242868    5599 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.2 succeeded
	I0918 12:44:43.242878    5599 cache.go:107] acquiring lock: {Name:mkd2c651e69a929e15471a0b04768b3babcd14aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.242888    5599 cache.go:107] acquiring lock: {Name:mkb4e1494b3cfbe5cc04f6f8e525176492d4c441 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.242915    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 exists
	I0918 12:44:43.242918    5599 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2" took 46.75µs
	I0918 12:44:43.242922    5599 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.2 succeeded
	I0918 12:44:43.242928    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0918 12:44:43.242928    5599 cache.go:107] acquiring lock: {Name:mk8e758ef34e93eabe6599c8eb50bf6615524a35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.242933    5599 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 45.5µs
	I0918 12:44:43.242937    5599 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0918 12:44:43.242964    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0918 12:44:43.242968    5599 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 41.417µs
	I0918 12:44:43.242972    5599 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0918 12:44:43.242964    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0918 12:44:43.242976    5599 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 197.75µs
	I0918 12:44:43.242980    5599 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0918 12:44:43.242946    5599 cache.go:107] acquiring lock: {Name:mk05360895828a594941f02758702e2ee3934c43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.243031    5599 cache.go:107] acquiring lock: {Name:mk16c86c2092140a82402d28511092f1c95af497 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:43.243062    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 exists
	I0918 12:44:43.243066    5599 start.go:365] acquiring machines lock for no-preload-249000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:43.243068    5599 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2" took 149.917µs
	I0918 12:44:43.243072    5599 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.2 succeeded
	I0918 12:44:43.243087    5599 cache.go:115] /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0918 12:44:43.243088    5599 start.go:369] acquired machines lock for "no-preload-249000" in 18.041µs
	I0918 12:44:43.243090    5599 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 129.333µs
	I0918 12:44:43.243094    5599 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0918 12:44:43.243098    5599 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:44:43.243099    5599 cache.go:87] Successfully saved all images to host disk.
	I0918 12:44:43.243103    5599 fix.go:54] fixHost starting: 
	I0918 12:44:43.243219    5599 fix.go:102] recreateIfNeeded on no-preload-249000: state=Stopped err=<nil>
	W0918 12:44:43.243228    5599 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:44:43.255709    5599 out.go:177] * Restarting existing qemu2 VM for "no-preload-249000" ...
	
	* 
	* Profile "stopped-upgrade-869000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-869000"

-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

TestStartStop/group/embed-certs/serial/FirstStart (10.01s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-330000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-330000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (9.946721709s)

-- stdout --
	* [embed-certs-330000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-330000 in cluster embed-certs-330000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-330000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:44:47.669643    5628 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:44:47.669769    5628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:47.669772    5628 out.go:309] Setting ErrFile to fd 2...
	I0918 12:44:47.669774    5628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:47.669896    5628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:44:47.670916    5628 out.go:303] Setting JSON to false
	I0918 12:44:47.685891    5628 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4461,"bootTime":1695061826,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:44:47.685964    5628 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:44:47.691308    5628 out.go:177] * [embed-certs-330000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:44:47.698281    5628 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:44:47.702317    5628 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:44:47.698348    5628 notify.go:220] Checking for updates...
	I0918 12:44:47.708258    5628 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:44:47.711271    5628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:44:47.712793    5628 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:44:47.716233    5628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:44:47.719591    5628 config.go:182] Loaded profile config "no-preload-249000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:44:47.719634    5628 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:44:47.724102    5628 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:44:47.731258    5628 start.go:298] selected driver: qemu2
	I0918 12:44:47.731264    5628 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:44:47.731269    5628 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:44:47.733378    5628 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:44:47.736226    5628 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:44:47.739372    5628 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:44:47.739399    5628 cni.go:84] Creating CNI manager for ""
	I0918 12:44:47.739416    5628 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:44:47.739429    5628 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:44:47.739434    5628 start_flags.go:321] config:
	{Name:embed-certs-330000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SS
HAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:44:47.743866    5628 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:47.750235    5628 out.go:177] * Starting control plane node embed-certs-330000 in cluster embed-certs-330000
	I0918 12:44:47.754249    5628 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:44:47.754287    5628 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:44:47.754301    5628 cache.go:57] Caching tarball of preloaded images
	I0918 12:44:47.754373    5628 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:44:47.754379    5628 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:44:47.754453    5628 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/embed-certs-330000/config.json ...
	I0918 12:44:47.754467    5628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/embed-certs-330000/config.json: {Name:mk1512894e96e1f8ae70c67962b31cbdcc8600d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:44:47.754686    5628 start.go:365] acquiring machines lock for embed-certs-330000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:47.754717    5628 start.go:369] acquired machines lock for "embed-certs-330000" in 25.584µs
	I0918 12:44:47.754730    5628 start.go:93] Provisioning new machine with config: &{Name:embed-certs-330000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:44:47.754762    5628 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:44:47.763214    5628 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:44:47.779805    5628 start.go:159] libmachine.API.Create for "embed-certs-330000" (driver="qemu2")
	I0918 12:44:47.779831    5628 client.go:168] LocalClient.Create starting
	I0918 12:44:47.779887    5628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:44:47.779912    5628 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:47.779922    5628 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:47.779960    5628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:44:47.779979    5628 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:47.779988    5628 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:47.780381    5628 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:44:47.900789    5628 main.go:141] libmachine: Creating SSH key...
	I0918 12:44:48.016676    5628 main.go:141] libmachine: Creating Disk image...
	I0918 12:44:48.016685    5628 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:44:48.016834    5628 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2
	I0918 12:44:48.025285    5628 main.go:141] libmachine: STDOUT: 
	I0918 12:44:48.025298    5628 main.go:141] libmachine: STDERR: 
	I0918 12:44:48.025345    5628 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2 +20000M
	I0918 12:44:48.032445    5628 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:44:48.032458    5628 main.go:141] libmachine: STDERR: 
	I0918 12:44:48.032479    5628 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2
	I0918 12:44:48.032487    5628 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:44:48.032537    5628 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:dc:c3:f9:a5:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2
	I0918 12:44:48.034106    5628 main.go:141] libmachine: STDOUT: 
	I0918 12:44:48.034118    5628 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:48.034138    5628 client.go:171] LocalClient.Create took 254.305708ms
	I0918 12:44:50.036286    5628 start.go:128] duration metric: createHost completed in 2.281546375s
	I0918 12:44:50.036421    5628 start.go:83] releasing machines lock for "embed-certs-330000", held for 2.281703416s
	W0918 12:44:50.036492    5628 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:50.052012    5628 out.go:177] * Deleting "embed-certs-330000" in qemu2 ...
	W0918 12:44:50.100362    5628 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:50.100399    5628 start.go:703] Will try again in 5 seconds ...
	I0918 12:44:55.102518    5628 start.go:365] acquiring machines lock for embed-certs-330000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:55.102864    5628 start.go:369] acquired machines lock for "embed-certs-330000" in 268.333µs
	I0918 12:44:55.102985    5628 start.go:93] Provisioning new machine with config: &{Name:embed-certs-330000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:44:55.103246    5628 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:44:55.108729    5628 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:44:55.153851    5628 start.go:159] libmachine.API.Create for "embed-certs-330000" (driver="qemu2")
	I0918 12:44:55.153897    5628 client.go:168] LocalClient.Create starting
	I0918 12:44:55.154011    5628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:44:55.154077    5628 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:55.154106    5628 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:55.154201    5628 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:44:55.154242    5628 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:55.154259    5628 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:55.154735    5628 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:44:55.286896    5628 main.go:141] libmachine: Creating SSH key...
	I0918 12:44:55.525903    5628 main.go:141] libmachine: Creating Disk image...
	I0918 12:44:55.525917    5628 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:44:55.526053    5628 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2
	I0918 12:44:55.534658    5628 main.go:141] libmachine: STDOUT: 
	I0918 12:44:55.534673    5628 main.go:141] libmachine: STDERR: 
	I0918 12:44:55.534728    5628 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2 +20000M
	I0918 12:44:55.542038    5628 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:44:55.542059    5628 main.go:141] libmachine: STDERR: 
	I0918 12:44:55.542076    5628 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2
	I0918 12:44:55.542080    5628 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:44:55.542133    5628 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:84:ad:7a:df:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2
	I0918 12:44:55.543657    5628 main.go:141] libmachine: STDOUT: 
	I0918 12:44:55.543671    5628 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:55.543684    5628 client.go:171] LocalClient.Create took 389.789333ms
	I0918 12:44:57.545859    5628 start.go:128] duration metric: createHost completed in 2.442626875s
	I0918 12:44:57.545916    5628 start.go:83] releasing machines lock for "embed-certs-330000", held for 2.44307225s
	W0918 12:44:57.546335    5628 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:57.556993    5628 out.go:177] 
	W0918 12:44:57.561034    5628 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:44:57.561061    5628 out.go:239] * 
	* 
	W0918 12:44:57.563698    5628 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:44:57.576926    5628 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-330000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000: exit status 7 (62.656958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.01s)
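Editor's note: every qemu2 start in this run fails at the same step: `socket_vmnet_client` cannot reach `/var/run/socket_vmnet` ("Connection refused"), so no VM ever boots. A minimal pre-flight check for that socket could look like the sketch below; the paths are taken from the log, but the helper name `check_socket_vmnet` and the use of `nc` to probe the socket are assumptions, not part of the test suite.

```shell
#!/bin/sh
# Hypothetical pre-flight check: is the socket_vmnet control socket present
# and accepting connections? The path is the one shown in the log above.
check_socket_vmnet() {
  sock="$1"
  if [ ! -S "$sock" ]; then
    echo "no socket at $sock - socket_vmnet does not appear to be running"
    return 1
  fi
  # nc -U targets a UNIX-domain socket; -z probes without sending data.
  # A refusal here reproduces the "Connection refused" seen in the log.
  if nc -z -U "$sock" 2>/dev/null; then
    echo "socket_vmnet is accepting connections on $sock"
  else
    echo "socket exists but refused the connection - the daemon may be dead"
    return 1
  fi
}

check_socket_vmnet /var/run/socket_vmnet || true
```

If the daemon is down on the CI host, restarting it before rerunning the suite (for a Homebrew install, something like `sudo brew services restart socket_vmnet` — an assumption about how it was installed) would be the first thing to try.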

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-249000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000: exit status 7 (29.931709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-249000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-249000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-249000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-249000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.46225ms)

** stderr ** 
	error: context "no-preload-249000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-249000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000: exit status 7 (27.648459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-249000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-249000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-249000 "sudo crictl images -o json": exit status 89 (39.640083ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-249000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-249000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-249000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000: exit status 7 (27.327584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-249000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-249000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-249000 --alsologtostderr -v=1: exit status 89 (38.237ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-249000"

-- /stdout --
** stderr ** 
	I0918 12:44:50.347667    5650 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:44:50.347823    5650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:50.347826    5650 out.go:309] Setting ErrFile to fd 2...
	I0918 12:44:50.347829    5650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:50.347952    5650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:44:50.348185    5650 out.go:303] Setting JSON to false
	I0918 12:44:50.348195    5650 mustload.go:65] Loading cluster: no-preload-249000
	I0918 12:44:50.348381    5650 config.go:182] Loaded profile config "no-preload-249000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:44:50.351998    5650 out.go:177] * The control plane node must be running for this command
	I0918 12:44:50.356051    5650 out.go:177]   To start a cluster, run: "minikube start -p no-preload-249000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-249000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000: exit status 7 (26.8335ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-249000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000: exit status 7 (27.042667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-249000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-884000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-884000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (9.754297459s)

-- stdout --
	* [default-k8s-diff-port-884000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-884000 in cluster default-k8s-diff-port-884000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:44:51.041105    5687 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:44:51.041214    5687 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:51.041218    5687 out.go:309] Setting ErrFile to fd 2...
	I0918 12:44:51.041220    5687 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:51.041361    5687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:44:51.042384    5687 out.go:303] Setting JSON to false
	I0918 12:44:51.057338    5687 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4465,"bootTime":1695061826,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:44:51.057400    5687 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:44:51.061889    5687 out.go:177] * [default-k8s-diff-port-884000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:44:51.068841    5687 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:44:51.072833    5687 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:44:51.068934    5687 notify.go:220] Checking for updates...
	I0918 12:44:51.078828    5687 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:44:51.081801    5687 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:44:51.084823    5687 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:44:51.087806    5687 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:44:51.091134    5687 config.go:182] Loaded profile config "embed-certs-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:44:51.091178    5687 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:44:51.095835    5687 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:44:51.102780    5687 start.go:298] selected driver: qemu2
	I0918 12:44:51.102786    5687 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:44:51.102795    5687 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:44:51.104743    5687 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 12:44:51.107757    5687 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:44:51.110889    5687 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:44:51.110915    5687 cni.go:84] Creating CNI manager for ""
	I0918 12:44:51.110932    5687 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:44:51.110937    5687 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:44:51.110942    5687 start_flags.go:321] config:
	{Name:default-k8s-diff-port-884000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-884000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stat
icIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:44:51.115178    5687 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:51.121789    5687 out.go:177] * Starting control plane node default-k8s-diff-port-884000 in cluster default-k8s-diff-port-884000
	I0918 12:44:51.125749    5687 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:44:51.125768    5687 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:44:51.125777    5687 cache.go:57] Caching tarball of preloaded images
	I0918 12:44:51.125832    5687 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:44:51.125838    5687 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:44:51.125890    5687 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/default-k8s-diff-port-884000/config.json ...
	I0918 12:44:51.125904    5687 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/default-k8s-diff-port-884000/config.json: {Name:mkf1979f5f9296dde389ba9daea1413bea3dc365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:44:51.126145    5687 start.go:365] acquiring machines lock for default-k8s-diff-port-884000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:51.126184    5687 start.go:369] acquired machines lock for "default-k8s-diff-port-884000" in 30.958µs
	I0918 12:44:51.126198    5687 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-884000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:44:51.126232    5687 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:44:51.133868    5687 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:44:51.149612    5687 start.go:159] libmachine.API.Create for "default-k8s-diff-port-884000" (driver="qemu2")
	I0918 12:44:51.149640    5687 client.go:168] LocalClient.Create starting
	I0918 12:44:51.149699    5687 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:44:51.149726    5687 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:51.149735    5687 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:51.149771    5687 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:44:51.149790    5687 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:51.149798    5687 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:51.150110    5687 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:44:51.266514    5687 main.go:141] libmachine: Creating SSH key...
	I0918 12:44:51.364732    5687 main.go:141] libmachine: Creating Disk image...
	I0918 12:44:51.364742    5687 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:44:51.364884    5687 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2
	I0918 12:44:51.373416    5687 main.go:141] libmachine: STDOUT: 
	I0918 12:44:51.373431    5687 main.go:141] libmachine: STDERR: 
	I0918 12:44:51.373475    5687 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2 +20000M
	I0918 12:44:51.380579    5687 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:44:51.380593    5687 main.go:141] libmachine: STDERR: 
	I0918 12:44:51.380607    5687 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2
	I0918 12:44:51.380615    5687 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:44:51.380656    5687 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:dd:e5:07:50:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2
	I0918 12:44:51.382144    5687 main.go:141] libmachine: STDOUT: 
	I0918 12:44:51.382157    5687 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:51.382176    5687 client.go:171] LocalClient.Create took 232.534541ms
	I0918 12:44:53.384311    5687 start.go:128] duration metric: createHost completed in 2.25809825s
	I0918 12:44:53.384372    5687 start.go:83] releasing machines lock for "default-k8s-diff-port-884000", held for 2.258220542s
	W0918 12:44:53.384432    5687 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:53.394587    5687 out.go:177] * Deleting "default-k8s-diff-port-884000" in qemu2 ...
	W0918 12:44:53.415626    5687 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:53.415653    5687 start.go:703] Will try again in 5 seconds ...
	I0918 12:44:58.417725    5687 start.go:365] acquiring machines lock for default-k8s-diff-port-884000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:58.418079    5687 start.go:369] acquired machines lock for "default-k8s-diff-port-884000" in 275.625µs
	I0918 12:44:58.418203    5687 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-884000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:44:58.418469    5687 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:44:58.426973    5687 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:44:58.473717    5687 start.go:159] libmachine.API.Create for "default-k8s-diff-port-884000" (driver="qemu2")
	I0918 12:44:58.473774    5687 client.go:168] LocalClient.Create starting
	I0918 12:44:58.473885    5687 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:44:58.473937    5687 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:58.473965    5687 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:58.474040    5687 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:44:58.474074    5687 main.go:141] libmachine: Decoding PEM data...
	I0918 12:44:58.474088    5687 main.go:141] libmachine: Parsing certificate...
	I0918 12:44:58.474629    5687 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:44:58.603690    5687 main.go:141] libmachine: Creating SSH key...
	I0918 12:44:58.710600    5687 main.go:141] libmachine: Creating Disk image...
	I0918 12:44:58.710609    5687 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:44:58.710759    5687 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2
	I0918 12:44:58.719502    5687 main.go:141] libmachine: STDOUT: 
	I0918 12:44:58.719515    5687 main.go:141] libmachine: STDERR: 
	I0918 12:44:58.719573    5687 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2 +20000M
	I0918 12:44:58.726700    5687 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:44:58.726715    5687 main.go:141] libmachine: STDERR: 
	I0918 12:44:58.726729    5687 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2
	I0918 12:44:58.726736    5687 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:44:58.726775    5687 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:b4:ed:10:59:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2
	I0918 12:44:58.728319    5687 main.go:141] libmachine: STDOUT: 
	I0918 12:44:58.728332    5687 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:58.728346    5687 client.go:171] LocalClient.Create took 254.572042ms
	I0918 12:45:00.730500    5687 start.go:128] duration metric: createHost completed in 2.312020959s
	I0918 12:45:00.730595    5687 start.go:83] releasing machines lock for "default-k8s-diff-port-884000", held for 2.312539291s
	W0918 12:45:00.731025    5687 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:45:00.739749    5687 out.go:177] 
	W0918 12:45:00.744791    5687 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:45:00.744825    5687 out.go:239] * 
	* 
	W0918 12:45:00.747441    5687 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:45:00.756715    5687 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-884000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 7 (63.179ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-884000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.82s)
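Every failure in this group reduces to the same root cause visible in the stderr above: QEMU's `socket_vmnet` network backend is not accepting connections on `/var/run/socket_vmnet`. A minimal sketch of that failure mode (using a hypothetical temp-dir path rather than the real daemon socket): connecting to a Unix-domain socket with no listener behind it raises an `OSError`, which is the class of error `socket_vmnet_client` reports back to libmachine.

```python
import os
import socket
import tempfile

# Hypothetical stand-in path; the report's real socket is /var/run/socket_vmnet.
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")

# No daemon is listening at `path`, so the connect attempt fails with an
# OSError (ENOENT here; ECONNREFUSED when the socket file exists but the
# daemon is gone) -- the same class of error shown in the QEMU stderr above.
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.connect(path)
    connected = True
except OSError:
    connected = False
finally:
    s.close()

print(connected)  # False
```

On the CI host itself, the practical equivalent is verifying that the `socket_vmnet` daemon is actually running (e.g. via `launchctl list`, if it is installed as a launchd service) before the qemu2 driver tests start.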

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-330000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-330000 create -f testdata/busybox.yaml: exit status 1 (31.246583ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-330000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000: exit status 7 (27.497ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-330000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000: exit status 7 (27.22475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-330000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-330000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-330000 describe deploy/metrics-server -n kube-system: exit status 1 (26.131625ms)

** stderr ** 
	error: context "embed-certs-330000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-330000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000: exit status 7 (27.404583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-330000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-330000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (5.177818125s)

-- stdout --
	* [embed-certs-330000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-330000 in cluster embed-certs-330000
	* Restarting existing qemu2 VM for "embed-certs-330000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-330000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:44:58.022106    5719 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:44:58.022238    5719 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:58.022241    5719 out.go:309] Setting ErrFile to fd 2...
	I0918 12:44:58.022244    5719 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:44:58.022380    5719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:44:58.023378    5719 out.go:303] Setting JSON to false
	I0918 12:44:58.038493    5719 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4472,"bootTime":1695061826,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:44:58.038581    5719 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:44:58.043121    5719 out.go:177] * [embed-certs-330000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:44:58.054009    5719 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:44:58.050153    5719 notify.go:220] Checking for updates...
	I0918 12:44:58.057981    5719 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:44:58.061069    5719 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:44:58.064048    5719 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:44:58.067054    5719 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:44:58.070019    5719 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:44:58.073421    5719 config.go:182] Loaded profile config "embed-certs-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:44:58.073701    5719 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:44:58.077975    5719 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 12:44:58.085067    5719 start.go:298] selected driver: qemu2
	I0918 12:44:58.085075    5719 start.go:902] validating driver "qemu2" against &{Name:embed-certs-330000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-330000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:44:58.085161    5719 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:44:58.087163    5719 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:44:58.087189    5719 cni.go:84] Creating CNI manager for ""
	I0918 12:44:58.087196    5719 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:44:58.087201    5719 start_flags.go:321] config:
	{Name:embed-certs-330000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-330000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:44:58.091401    5719 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:44:58.098043    5719 out.go:177] * Starting control plane node embed-certs-330000 in cluster embed-certs-330000
	I0918 12:44:58.100970    5719 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:44:58.100988    5719 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:44:58.100993    5719 cache.go:57] Caching tarball of preloaded images
	I0918 12:44:58.101046    5719 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:44:58.101051    5719 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:44:58.101101    5719 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/embed-certs-330000/config.json ...
	I0918 12:44:58.101458    5719 start.go:365] acquiring machines lock for embed-certs-330000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:44:58.101493    5719 start.go:369] acquired machines lock for "embed-certs-330000" in 28.834µs
	I0918 12:44:58.101503    5719 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:44:58.101507    5719 fix.go:54] fixHost starting: 
	I0918 12:44:58.101616    5719 fix.go:102] recreateIfNeeded on embed-certs-330000: state=Stopped err=<nil>
	W0918 12:44:58.101625    5719 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:44:58.108859    5719 out.go:177] * Restarting existing qemu2 VM for "embed-certs-330000" ...
	I0918 12:44:58.113074    5719 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:84:ad:7a:df:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2
	I0918 12:44:58.114894    5719 main.go:141] libmachine: STDOUT: 
	I0918 12:44:58.114917    5719 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:44:58.114945    5719 fix.go:56] fixHost completed within 13.436667ms
	I0918 12:44:58.114949    5719 start.go:83] releasing machines lock for "embed-certs-330000", held for 13.453ms
	W0918 12:44:58.114956    5719 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:44:58.114980    5719 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:44:58.114984    5719 start.go:703] Will try again in 5 seconds ...
	I0918 12:45:03.117094    5719 start.go:365] acquiring machines lock for embed-certs-330000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:45:03.117459    5719 start.go:369] acquired machines lock for "embed-certs-330000" in 273.916µs
	I0918 12:45:03.117595    5719 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:45:03.117613    5719 fix.go:54] fixHost starting: 
	I0918 12:45:03.118302    5719 fix.go:102] recreateIfNeeded on embed-certs-330000: state=Stopped err=<nil>
	W0918 12:45:03.118332    5719 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:45:03.126552    5719 out.go:177] * Restarting existing qemu2 VM for "embed-certs-330000" ...
	I0918 12:45:03.130920    5719 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:84:ad:7a:df:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/embed-certs-330000/disk.qcow2
	I0918 12:45:03.139097    5719 main.go:141] libmachine: STDOUT: 
	I0918 12:45:03.139157    5719 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:45:03.139244    5719 fix.go:56] fixHost completed within 21.625584ms
	I0918 12:45:03.139261    5719 start.go:83] releasing machines lock for "embed-certs-330000", held for 21.783958ms
	W0918 12:45:03.139448    5719 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-330000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-330000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:45:03.146672    5719 out.go:177] 
	W0918 12:45:03.150748    5719 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:45:03.150798    5719 out.go:239] * 
	* 
	W0918 12:45:03.153509    5719 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:45:03.161748    5719 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-330000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000: exit status 7 (67.633417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-884000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-884000 create -f testdata/busybox.yaml: exit status 1 (30.9135ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-884000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 7 (28.297334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-884000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 7 (27.253625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-884000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-884000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-884000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-884000 describe deploy/metrics-server -n kube-system: exit status 1 (25.671208ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-884000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-884000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 7 (27.746291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-884000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-884000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-884000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (5.191580583s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-884000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-884000 in cluster default-k8s-diff-port-884000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-884000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-884000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 12:45:01.212211    5748 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:45:01.212336    5748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:45:01.212338    5748 out.go:309] Setting ErrFile to fd 2...
	I0918 12:45:01.212341    5748 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:45:01.212466    5748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:45:01.213437    5748 out.go:303] Setting JSON to false
	I0918 12:45:01.228607    5748 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4475,"bootTime":1695061826,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:45:01.228667    5748 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:45:01.233617    5748 out.go:177] * [default-k8s-diff-port-884000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:45:01.240567    5748 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:45:01.240628    5748 notify.go:220] Checking for updates...
	I0918 12:45:01.247579    5748 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:45:01.250577    5748 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:45:01.253561    5748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:45:01.256574    5748 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:45:01.259549    5748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:45:01.262884    5748 config.go:182] Loaded profile config "default-k8s-diff-port-884000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:45:01.263151    5748 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:45:01.267585    5748 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 12:45:01.274556    5748 start.go:298] selected driver: qemu2
	I0918 12:45:01.274564    5748 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-884000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:45:01.274619    5748 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:45:01.276708    5748 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:45:01.276737    5748 cni.go:84] Creating CNI manager for ""
	I0918 12:45:01.276745    5748 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:45:01.276755    5748 start_flags.go:321] config:
	{Name:default-k8s-diff-port-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-884000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:45:01.280866    5748 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:45:01.287574    5748 out.go:177] * Starting control plane node default-k8s-diff-port-884000 in cluster default-k8s-diff-port-884000
	I0918 12:45:01.291616    5748 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:45:01.291642    5748 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:45:01.291655    5748 cache.go:57] Caching tarball of preloaded images
	I0918 12:45:01.291710    5748 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:45:01.291715    5748 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:45:01.291779    5748 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/default-k8s-diff-port-884000/config.json ...
	I0918 12:45:01.292082    5748 start.go:365] acquiring machines lock for default-k8s-diff-port-884000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:45:01.292107    5748 start.go:369] acquired machines lock for "default-k8s-diff-port-884000" in 19.583µs
	I0918 12:45:01.292117    5748 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:45:01.292122    5748 fix.go:54] fixHost starting: 
	I0918 12:45:01.292238    5748 fix.go:102] recreateIfNeeded on default-k8s-diff-port-884000: state=Stopped err=<nil>
	W0918 12:45:01.292246    5748 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:45:01.296541    5748 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-884000" ...
	I0918 12:45:01.303565    5748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:b4:ed:10:59:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2
	I0918 12:45:01.305400    5748 main.go:141] libmachine: STDOUT: 
	I0918 12:45:01.305417    5748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:45:01.305443    5748 fix.go:56] fixHost completed within 13.321291ms
	I0918 12:45:01.305447    5748 start.go:83] releasing machines lock for "default-k8s-diff-port-884000", held for 13.335667ms
	W0918 12:45:01.305453    5748 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:45:01.305494    5748 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:45:01.305498    5748 start.go:703] Will try again in 5 seconds ...
	I0918 12:45:06.306340    5748 start.go:365] acquiring machines lock for default-k8s-diff-port-884000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:45:06.306721    5748 start.go:369] acquired machines lock for "default-k8s-diff-port-884000" in 311.291µs
	I0918 12:45:06.306800    5748 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:45:06.306818    5748 fix.go:54] fixHost starting: 
	I0918 12:45:06.307501    5748 fix.go:102] recreateIfNeeded on default-k8s-diff-port-884000: state=Stopped err=<nil>
	W0918 12:45:06.307527    5748 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:45:06.317991    5748 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-884000" ...
	I0918 12:45:06.330092    5748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:b4:ed:10:59:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/default-k8s-diff-port-884000/disk.qcow2
	I0918 12:45:06.339057    5748 main.go:141] libmachine: STDOUT: 
	I0918 12:45:06.339121    5748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:45:06.339219    5748 fix.go:56] fixHost completed within 32.398875ms
	I0918 12:45:06.339235    5748 start.go:83] releasing machines lock for "default-k8s-diff-port-884000", held for 32.494083ms
	W0918 12:45:06.339408    5748 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-884000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-884000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:45:06.347891    5748 out.go:177] 
	W0918 12:45:06.351911    5748 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:45:06.351978    5748 out.go:239] * 
	* 
	W0918 12:45:06.354237    5748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:45:06.363891    5748 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-884000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 7 (63.03425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-884000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-330000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000: exit status 7 (30.469167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-330000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-330000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-330000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.940667ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-330000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-330000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000: exit status 7 (27.248583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-330000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-330000 "sudo crictl images -o json": exit status 89 (37.483333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-330000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-330000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-330000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000: exit status 7 (26.905375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.06s)

TestStartStop/group/embed-certs/serial/Pause (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-330000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-330000 --alsologtostderr -v=1: exit status 89 (39.633833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-330000"

-- /stdout --
** stderr ** 
	I0918 12:45:03.419395    5767 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:45:03.419543    5767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:45:03.419546    5767 out.go:309] Setting ErrFile to fd 2...
	I0918 12:45:03.419549    5767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:45:03.419675    5767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:45:03.419908    5767 out.go:303] Setting JSON to false
	I0918 12:45:03.419918    5767 mustload.go:65] Loading cluster: embed-certs-330000
	I0918 12:45:03.420099    5767 config.go:182] Loaded profile config "embed-certs-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:45:03.424687    5767 out.go:177] * The control plane node must be running for this command
	I0918 12:45:03.428920    5767 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-330000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-330000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000: exit status 7 (27.728917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-330000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000: exit status 7 (27.058541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.09s)

TestStartStop/group/newest-cni/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-363000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-363000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (9.825665792s)

-- stdout --
	* [newest-cni-363000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-363000 in cluster newest-cni-363000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-363000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 12:45:03.878870    5790 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:45:03.878999    5790 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:45:03.879002    5790 out.go:309] Setting ErrFile to fd 2...
	I0918 12:45:03.879004    5790 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:45:03.879127    5790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:45:03.880268    5790 out.go:303] Setting JSON to false
	I0918 12:45:03.894851    5790 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4477,"bootTime":1695061826,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:45:03.894920    5790 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:45:03.898433    5790 out.go:177] * [newest-cni-363000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:45:03.903449    5790 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:45:03.903493    5790 notify.go:220] Checking for updates...
	I0918 12:45:03.907416    5790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:45:03.910360    5790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:45:03.913346    5790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:45:03.916368    5790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:45:03.917700    5790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:45:03.920710    5790 config.go:182] Loaded profile config "default-k8s-diff-port-884000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:45:03.920754    5790 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:45:03.925368    5790 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:45:03.930275    5790 start.go:298] selected driver: qemu2
	I0918 12:45:03.930281    5790 start.go:902] validating driver "qemu2" against <nil>
	I0918 12:45:03.930286    5790 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:45:03.932195    5790 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0918 12:45:03.932217    5790 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0918 12:45:03.940195    5790 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:45:03.943412    5790 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0918 12:45:03.943432    5790 cni.go:84] Creating CNI manager for ""
	I0918 12:45:03.943438    5790 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:45:03.943442    5790 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:45:03.943446    5790 start_flags.go:321] config:
	{Name:newest-cni-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-363000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:45:03.947394    5790 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:45:03.954329    5790 out.go:177] * Starting control plane node newest-cni-363000 in cluster newest-cni-363000
	I0918 12:45:03.958363    5790 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:45:03.958379    5790 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:45:03.958386    5790 cache.go:57] Caching tarball of preloaded images
	I0918 12:45:03.958438    5790 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:45:03.958449    5790 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:45:03.958507    5790 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/newest-cni-363000/config.json ...
	I0918 12:45:03.958525    5790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/newest-cni-363000/config.json: {Name:mk87a388599791f8a5cff479f50ca514e27a9f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:45:03.958744    5790 start.go:365] acquiring machines lock for newest-cni-363000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:45:03.958784    5790 start.go:369] acquired machines lock for "newest-cni-363000" in 33.875µs
	I0918 12:45:03.958796    5790 start.go:93] Provisioning new machine with config: &{Name:newest-cni-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-363000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:45:03.958828    5790 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:45:03.967333    5790 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:45:03.982446    5790 start.go:159] libmachine.API.Create for "newest-cni-363000" (driver="qemu2")
	I0918 12:45:03.982480    5790 client.go:168] LocalClient.Create starting
	I0918 12:45:03.982545    5790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:45:03.982575    5790 main.go:141] libmachine: Decoding PEM data...
	I0918 12:45:03.982591    5790 main.go:141] libmachine: Parsing certificate...
	I0918 12:45:03.982629    5790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:45:03.982646    5790 main.go:141] libmachine: Decoding PEM data...
	I0918 12:45:03.982654    5790 main.go:141] libmachine: Parsing certificate...
	I0918 12:45:03.982950    5790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:45:04.103918    5790 main.go:141] libmachine: Creating SSH key...
	I0918 12:45:04.286221    5790 main.go:141] libmachine: Creating Disk image...
	I0918 12:45:04.286236    5790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:45:04.286397    5790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2
	I0918 12:45:04.295370    5790 main.go:141] libmachine: STDOUT: 
	I0918 12:45:04.295382    5790 main.go:141] libmachine: STDERR: 
	I0918 12:45:04.295431    5790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2 +20000M
	I0918 12:45:04.302523    5790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:45:04.302545    5790 main.go:141] libmachine: STDERR: 
	I0918 12:45:04.302556    5790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2
	I0918 12:45:04.302564    5790 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:45:04.302610    5790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f0:4f:6b:c0:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2
	I0918 12:45:04.304106    5790 main.go:141] libmachine: STDOUT: 
	I0918 12:45:04.304118    5790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:45:04.304138    5790 client.go:171] LocalClient.Create took 321.657625ms
	I0918 12:45:06.306278    5790 start.go:128] duration metric: createHost completed in 2.34747075s
	I0918 12:45:06.306341    5790 start.go:83] releasing machines lock for "newest-cni-363000", held for 2.347590791s
	W0918 12:45:06.306398    5790 start.go:688] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:45:06.325939    5790 out.go:177] * Deleting "newest-cni-363000" in qemu2 ...
	W0918 12:45:06.375026    5790 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:45:06.375068    5790 start.go:703] Will try again in 5 seconds ...
	I0918 12:45:11.375642    5790 start.go:365] acquiring machines lock for newest-cni-363000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:45:11.376210    5790 start.go:369] acquired machines lock for "newest-cni-363000" in 451.125µs
	I0918 12:45:11.376340    5790 start.go:93] Provisioning new machine with config: &{Name:newest-cni-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-363000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:45:11.376614    5790 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:45:11.382262    5790 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 12:45:11.432689    5790 start.go:159] libmachine.API.Create for "newest-cni-363000" (driver="qemu2")
	I0918 12:45:11.432759    5790 client.go:168] LocalClient.Create starting
	I0918 12:45:11.432909    5790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/ca.pem
	I0918 12:45:11.432974    5790 main.go:141] libmachine: Decoding PEM data...
	I0918 12:45:11.432997    5790 main.go:141] libmachine: Parsing certificate...
	I0918 12:45:11.433071    5790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17263-1251/.minikube/certs/cert.pem
	I0918 12:45:11.433112    5790 main.go:141] libmachine: Decoding PEM data...
	I0918 12:45:11.433130    5790 main.go:141] libmachine: Parsing certificate...
	I0918 12:45:11.433673    5790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso...
	I0918 12:45:11.563946    5790 main.go:141] libmachine: Creating SSH key...
	I0918 12:45:11.618229    5790 main.go:141] libmachine: Creating Disk image...
	I0918 12:45:11.618238    5790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:45:11.618378    5790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2
	I0918 12:45:11.626880    5790 main.go:141] libmachine: STDOUT: 
	I0918 12:45:11.626896    5790 main.go:141] libmachine: STDERR: 
	I0918 12:45:11.626966    5790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2 +20000M
	I0918 12:45:11.634169    5790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:45:11.634184    5790 main.go:141] libmachine: STDERR: 
	I0918 12:45:11.634192    5790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2
	I0918 12:45:11.634198    5790 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:45:11.634241    5790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:9c:a0:77:66:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2
	I0918 12:45:11.635714    5790 main.go:141] libmachine: STDOUT: 
	I0918 12:45:11.635727    5790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:45:11.635741    5790 client.go:171] LocalClient.Create took 202.980167ms
	I0918 12:45:13.637867    5790 start.go:128] duration metric: createHost completed in 2.261269791s
	I0918 12:45:13.637958    5790 start.go:83] releasing machines lock for "newest-cni-363000", held for 2.261732792s
	W0918 12:45:13.638381    5790 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:45:13.648972    5790 out.go:177] 
	W0918 12:45:13.654073    5790 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:45:13.654107    5790 out.go:239] * 
	* 
	W0918 12:45:13.656882    5790 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:45:13.667029    5790 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-363000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-363000 -n newest-cni-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-363000 -n newest-cni-363000: exit status 7 (64.371208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-363000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.89s)
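Every failure in this group traces back to the same root cause: `Failed to connect to "/var/run/socket_vmnet": Connection refused`. A minimal diagnostic sketch for reproducing machines follows; the socket and client paths are taken verbatim from the log above, while the use of `pgrep` and the assumption that the daemon process is named `socket_vmnet` are hypothetical, not part of this report:

```shell
#!/bin/sh
# Quick triage for the "Connection refused" on /var/run/socket_vmnet seen
# throughout this report. Paths come from the log; the rest is an assumed
# diagnostic, not a documented minikube procedure.
SOCK=/var/run/socket_vmnet
CLIENT=/opt/socket_vmnet/bin/socket_vmnet_client

# 1. Is the unix socket actually there? A missing socket usually means the
#    socket_vmnet daemon was never started (or crashed).
if [ -S "$SOCK" ]; then
  echo "socket present: $SOCK"
else
  echo "socket missing or not a socket: $SOCK (daemon likely not running)"
fi

# 2. Is the client binary that minikube execs (visible in the libmachine
#    command line above) installed and executable?
if [ -x "$CLIENT" ]; then
  echo "client present: $CLIENT"
else
  echo "client missing: $CLIENT"
fi

# 3. Is any socket_vmnet process alive? (process name assumed)
pgrep -fl socket_vmnet || echo "no socket_vmnet process found"
```

If the socket is missing, restarting the socket_vmnet service (however it is managed on the CI host, e.g. via launchd) before re-running the suite would be the first thing to try.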

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-884000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 7 (30.949166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-884000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-884000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-884000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-884000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.716084ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-884000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-884000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 7 (27.969958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-884000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-884000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-884000 "sudo crictl images -o json": exit status 89 (38.52525ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-884000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-884000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-884000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 7 (26.818458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-884000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-884000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-884000 --alsologtostderr -v=1: exit status 89 (39.625125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-884000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 12:45:06.619415    5816 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:45:06.619565    5816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:45:06.619568    5816 out.go:309] Setting ErrFile to fd 2...
	I0918 12:45:06.619570    5816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:45:06.619692    5816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:45:06.619924    5816 out.go:303] Setting JSON to false
	I0918 12:45:06.619934    5816 mustload.go:65] Loading cluster: default-k8s-diff-port-884000
	I0918 12:45:06.620125    5816 config.go:182] Loaded profile config "default-k8s-diff-port-884000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:45:06.624742    5816 out.go:177] * The control plane node must be running for this command
	I0918 12:45:06.628938    5816 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-884000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-884000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 7 (27.536792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-884000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 7 (27.023542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-884000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-363000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-363000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2: exit status 80 (5.175012792s)

                                                
                                                
-- stdout --
	* [newest-cni-363000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-363000 in cluster newest-cni-363000
	* Restarting existing qemu2 VM for "newest-cni-363000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-363000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 12:45:13.990437    5855 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:45:13.990548    5855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:45:13.990551    5855 out.go:309] Setting ErrFile to fd 2...
	I0918 12:45:13.990553    5855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:45:13.990684    5855 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:45:13.991681    5855 out.go:303] Setting JSON to false
	I0918 12:45:14.006548    5855 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4487,"bootTime":1695061826,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:45:14.006613    5855 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:45:14.010019    5855 out.go:177] * [newest-cni-363000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:45:14.016860    5855 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:45:14.020863    5855 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:45:14.016974    5855 notify.go:220] Checking for updates...
	I0918 12:45:14.026854    5855 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:45:14.029891    5855 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:45:14.032852    5855 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:45:14.035796    5855 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:45:14.039082    5855 config.go:182] Loaded profile config "newest-cni-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:45:14.039361    5855 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:45:14.043865    5855 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 12:45:14.050881    5855 start.go:298] selected driver: qemu2
	I0918 12:45:14.050888    5855 start.go:902] validating driver "qemu2" against &{Name:newest-cni-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-363000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:45:14.050968    5855 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:45:14.053059    5855 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0918 12:45:14.053082    5855 cni.go:84] Creating CNI manager for ""
	I0918 12:45:14.053089    5855 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:45:14.053094    5855 start_flags.go:321] config:
	{Name:newest-cni-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-363000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:45:14.057207    5855 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:45:14.062816    5855 out.go:177] * Starting control plane node newest-cni-363000 in cluster newest-cni-363000
	I0918 12:45:14.066853    5855 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 12:45:14.066868    5855 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 12:45:14.066878    5855 cache.go:57] Caching tarball of preloaded images
	I0918 12:45:14.066927    5855 preload.go:174] Found /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:45:14.066936    5855 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 12:45:14.066989    5855 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/newest-cni-363000/config.json ...
	I0918 12:45:14.067285    5855 start.go:365] acquiring machines lock for newest-cni-363000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:45:14.067310    5855 start.go:369] acquired machines lock for "newest-cni-363000" in 19.375µs
	I0918 12:45:14.067321    5855 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:45:14.067324    5855 fix.go:54] fixHost starting: 
	I0918 12:45:14.067435    5855 fix.go:102] recreateIfNeeded on newest-cni-363000: state=Stopped err=<nil>
	W0918 12:45:14.067442    5855 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:45:14.071842    5855 out.go:177] * Restarting existing qemu2 VM for "newest-cni-363000" ...
	I0918 12:45:14.079814    5855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:9c:a0:77:66:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2
	I0918 12:45:14.081545    5855 main.go:141] libmachine: STDOUT: 
	I0918 12:45:14.081564    5855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:45:14.081591    5855 fix.go:56] fixHost completed within 14.265416ms
	I0918 12:45:14.081596    5855 start.go:83] releasing machines lock for "newest-cni-363000", held for 14.282083ms
	W0918 12:45:14.081601    5855 start.go:688] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:45:14.081631    5855 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:45:14.081636    5855 start.go:703] Will try again in 5 seconds ...
	I0918 12:45:19.083789    5855 start.go:365] acquiring machines lock for newest-cni-363000: {Name:mk4b1be517888c87845a1fd1bcf71e9e6d2854bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:45:19.084158    5855 start.go:369] acquired machines lock for "newest-cni-363000" in 291.084µs
	I0918 12:45:19.084302    5855 start.go:96] Skipping create...Using existing machine configuration
	I0918 12:45:19.084322    5855 fix.go:54] fixHost starting: 
	I0918 12:45:19.085028    5855 fix.go:102] recreateIfNeeded on newest-cni-363000: state=Stopped err=<nil>
	W0918 12:45:19.085052    5855 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 12:45:19.093419    5855 out.go:177] * Restarting existing qemu2 VM for "newest-cni-363000" ...
	I0918 12:45:19.096690    5855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:9c:a0:77:66:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17263-1251/.minikube/machines/newest-cni-363000/disk.qcow2
	I0918 12:45:19.105446    5855 main.go:141] libmachine: STDOUT: 
	I0918 12:45:19.105519    5855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 12:45:19.105603    5855 fix.go:56] fixHost completed within 21.282375ms
	I0918 12:45:19.105627    5855 start.go:83] releasing machines lock for "newest-cni-363000", held for 21.447333ms
	W0918 12:45:19.105856    5855 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-363000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-363000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 12:45:19.113457    5855 out.go:177] 
	W0918 12:45:19.117525    5855 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 12:45:19.117547    5855 out.go:239] * 
	* 
	W0918 12:45:19.120000    5855 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:45:19.127293    5855 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-363000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-363000 -n newest-cni-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-363000 -n newest-cni-363000: exit status 7 (67.397666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-363000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-363000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-363000 "sudo crictl images -o json": exit status 89 (44.038666ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-363000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-363000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-363000"
start_stop_delete_test.go:304: v1.28.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.2",
- 	"registry.k8s.io/kube-controller-manager:v1.28.2",
- 	"registry.k8s.io/kube-proxy:v1.28.2",
- 	"registry.k8s.io/kube-scheduler:v1.28.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-363000 -n newest-cni-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-363000 -n newest-cni-363000: exit status 7 (28.046459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-363000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-363000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-363000 --alsologtostderr -v=1: exit status 89 (38.665ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-363000"

-- /stdout --
** stderr ** 
	I0918 12:45:19.307648    5873 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:45:19.307796    5873 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:45:19.307799    5873 out.go:309] Setting ErrFile to fd 2...
	I0918 12:45:19.307801    5873 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:45:19.307916    5873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:45:19.308143    5873 out.go:303] Setting JSON to false
	I0918 12:45:19.308152    5873 mustload.go:65] Loading cluster: newest-cni-363000
	I0918 12:45:19.308343    5873 config.go:182] Loaded profile config "newest-cni-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:45:19.311723    5873 out.go:177] * The control plane node must be running for this command
	I0918 12:45:19.315703    5873 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-363000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-363000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-363000 -n newest-cni-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-363000 -n newest-cni-363000: exit status 7 (27.942458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-363000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-363000 -n newest-cni-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-363000 -n newest-cni-363000: exit status 7 (28.118542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-363000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (155/260)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.2/json-events 14.57
11 TestDownloadOnly/v1.28.2/preload-exists 0
14 TestDownloadOnly/v1.28.2/kubectl 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
19 TestBinaryMirror 0.36
22 TestAddons/Setup 403.62
26 TestAddons/parallel/InspektorGadget 10.22
27 TestAddons/parallel/MetricsServer 5.25
30 TestAddons/parallel/CSI 51.24
31 TestAddons/parallel/Headlamp 11.58
35 TestAddons/serial/GCPAuth/Namespaces 0.07
36 TestAddons/StoppedEnableDisable 12.26
44 TestHyperKitDriverInstallOrUpdate 8.18
47 TestErrorSpam/setup 29.76
48 TestErrorSpam/start 0.34
49 TestErrorSpam/status 0.25
50 TestErrorSpam/pause 0.68
51 TestErrorSpam/unpause 0.62
52 TestErrorSpam/stop 3.23
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 45.9
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 32.98
59 TestFunctional/serial/KubeContext 0.03
60 TestFunctional/serial/KubectlGetPods 0.04
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.7
64 TestFunctional/serial/CacheCmd/cache/add_local 1.26
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
66 TestFunctional/serial/CacheCmd/cache/list 0.03
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.98
69 TestFunctional/serial/CacheCmd/cache/delete 0.06
70 TestFunctional/serial/MinikubeKubectlCmd 0.4
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.53
72 TestFunctional/serial/ExtraConfig 38.44
73 TestFunctional/serial/ComponentHealth 0.04
74 TestFunctional/serial/LogsCmd 0.63
75 TestFunctional/serial/LogsFileCmd 0.6
76 TestFunctional/serial/InvalidService 4.62
78 TestFunctional/parallel/ConfigCmd 0.2
79 TestFunctional/parallel/DashboardCmd 10.16
80 TestFunctional/parallel/DryRun 0.22
81 TestFunctional/parallel/InternationalLanguage 0.12
82 TestFunctional/parallel/StatusCmd 0.27
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 25.03
90 TestFunctional/parallel/SSHCmd 0.14
91 TestFunctional/parallel/CpCmd 0.29
93 TestFunctional/parallel/FileSync 0.07
94 TestFunctional/parallel/CertSync 0.45
98 TestFunctional/parallel/NodeLabels 0.05
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
102 TestFunctional/parallel/License 0.2
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.12
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
109 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
110 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
111 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
112 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
113 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
114 TestFunctional/parallel/ServiceCmd/DeployApp 7.11
115 TestFunctional/parallel/ServiceCmd/List 0.29
116 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
117 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
118 TestFunctional/parallel/ServiceCmd/Format 0.11
119 TestFunctional/parallel/ServiceCmd/URL 0.11
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
121 TestFunctional/parallel/ProfileCmd/profile_list 0.15
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
123 TestFunctional/parallel/MountCmd/any-port 4.02
124 TestFunctional/parallel/MountCmd/specific-port 0.89
125 TestFunctional/parallel/MountCmd/VerifyCleanup 0.8
126 TestFunctional/parallel/Version/short 0.04
127 TestFunctional/parallel/Version/components 0.19
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.09
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
132 TestFunctional/parallel/ImageCommands/ImageBuild 1.82
133 TestFunctional/parallel/ImageCommands/Setup 1.77
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.21
135 TestFunctional/parallel/DockerEnv/bash 0.38
136 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
137 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
138 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.51
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.5
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.63
145 TestFunctional/delete_addon-resizer_images 0.12
146 TestFunctional/delete_my-image_image 0.04
147 TestFunctional/delete_minikube_cached_images 0.04
151 TestImageBuild/serial/Setup 28.06
152 TestImageBuild/serial/NormalBuild 1.08
154 TestImageBuild/serial/BuildWithDockerIgnore 0.13
155 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.11
158 TestIngressAddonLegacy/StartLegacyK8sCluster 72.26
160 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.36
161 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.24
165 TestJSONOutput/start/Command 43.26
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/pause/Command 0.28
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/unpause/Command 0.22
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 12.08
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.32
193 TestMainNoArgs 0.03
197 TestMountStart/serial/StartWithMountFirst 18.18
198 TestMountStart/serial/VerifyMountFirst 0.19
199 TestMountStart/serial/StartWithMountSecond 18.35
200 TestMountStart/serial/VerifyMountSecond 0.2
201 TestMountStart/serial/DeleteFirst 0.1
205 TestMultiNode/serial/FreshStart2Nodes 100.4
206 TestMultiNode/serial/DeployApp2Nodes 3.68
207 TestMultiNode/serial/PingHostFrom2Pods 0.54
208 TestMultiNode/serial/AddNode 35.68
209 TestMultiNode/serial/ProfileList 0.18
210 TestMultiNode/serial/CopyFile 2.58
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
260 TestNoKubernetes/serial/ProfileList 0.14
261 TestNoKubernetes/serial/Stop 0.06
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
277 TestStartStop/group/old-k8s-version/serial/Stop 0.06
278 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
288 TestStartStop/group/no-preload/serial/Stop 0.06
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
303 TestStartStop/group/embed-certs/serial/Stop 0.06
304 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.08
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
323 TestStartStop/group/newest-cni/serial/Stop 0.06
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
326 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
327 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-242000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-242000: exit status 85 (96.760958ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:51 PDT |          |
	|         | -p download-only-242000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 11:51:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 11:51:43.673352    1670 out.go:296] Setting OutFile to fd 1 ...
	I0918 11:51:43.673496    1670 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 11:51:43.673499    1670 out.go:309] Setting ErrFile to fd 2...
	I0918 11:51:43.673502    1670 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 11:51:43.673628    1670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	W0918 11:51:43.673712    1670 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17263-1251/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17263-1251/.minikube/config/config.json: no such file or directory
	I0918 11:51:43.674814    1670 out.go:303] Setting JSON to true
	I0918 11:51:43.691096    1670 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1277,"bootTime":1695061826,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 11:51:43.691153    1670 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 11:51:43.696788    1670 out.go:97] [download-only-242000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 11:51:43.700713    1670 out.go:169] MINIKUBE_LOCATION=17263
	I0918 11:51:43.696908    1670 notify.go:220] Checking for updates...
	W0918 11:51:43.696937    1670 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 11:51:43.707561    1670 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 11:51:43.711781    1670 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 11:51:43.714758    1670 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 11:51:43.716116    1670 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	W0918 11:51:43.721757    1670 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 11:51:43.721987    1670 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 11:51:43.727772    1670 out.go:97] Using the qemu2 driver based on user configuration
	I0918 11:51:43.727778    1670 start.go:298] selected driver: qemu2
	I0918 11:51:43.727792    1670 start.go:902] validating driver "qemu2" against <nil>
	I0918 11:51:43.727853    1670 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 11:51:43.731695    1670 out.go:169] Automatically selected the socket_vmnet network
	I0918 11:51:43.737213    1670 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0918 11:51:43.737298    1670 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 11:51:43.737361    1670 cni.go:84] Creating CNI manager for ""
	I0918 11:51:43.737378    1670 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 11:51:43.737384    1670 start_flags.go:321] config:
	{Name:download-only-242000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-242000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 11:51:43.742567    1670 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 11:51:43.745703    1670 out.go:97] Downloading VM boot image ...
	I0918 11:51:43.745736    1670 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/iso/arm64/minikube-v1.31.0-1694798110-17250-arm64.iso
	I0918 11:51:50.801506    1670 out.go:97] Starting control plane node download-only-242000 in cluster download-only-242000
	I0918 11:51:50.801529    1670 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0918 11:51:50.861845    1670 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0918 11:51:50.861853    1670 cache.go:57] Caching tarball of preloaded images
	I0918 11:51:50.862024    1670 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0918 11:51:50.865690    1670 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0918 11:51:50.865697    1670 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 11:51:50.946758    1670 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0918 11:51:59.686637    1670 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 11:51:59.686787    1670 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 11:52:00.326560    1670 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0918 11:52:00.326753    1670 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/download-only-242000/config.json ...
	I0918 11:52:00.326771    1670 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/download-only-242000/config.json: {Name:mk2d38f7178624dd8e5685d2e554cb81270be80a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 11:52:00.327027    1670 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0918 11:52:00.327185    1670 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0918 11:52:00.800545    1670 out.go:169] 
	W0918 11:52:00.804618    1670 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17263-1251/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1083497e0 0x1083497e0 0x1083497e0 0x1083497e0 0x1083497e0 0x1083497e0 0x1083497e0] Decompressors:map[bz2:0x1400013cdd0 gz:0x1400013cdd8 tar:0x1400013cd10 tar.bz2:0x1400013cd20 tar.gz:0x1400013cd30 tar.xz:0x1400013cd50 tar.zst:0x1400013cd60 tbz2:0x1400013cd20 tgz:0x1400013cd30 txz:0x1400013cd50 tzst:0x1400013cd60 xz:0x1400013cde0 zip:0x1400013ce20 zst:0x1400013cde8] Getters:map[file:0x14000cbcdb0 http:0x1400017e910 https:0x1400017e960] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0918 11:52:00.804643    1670 out_reason.go:110] 
	W0918 11:52:00.809566    1670 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 11:52:00.813560    1670 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-242000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)

TestDownloadOnly/v1.28.2/json-events (14.57s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-242000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-242000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=qemu2 : (14.572358833s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (14.57s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
--- PASS: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-242000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-242000: exit status 85 (82.151125ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:51 PDT |          |
	|         | -p download-only-242000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-242000 | jenkins | v1.31.2 | 18 Sep 23 11:52 PDT |          |
	|         | -p download-only-242000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 11:52:00
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 11:52:00.998726    1682 out.go:296] Setting OutFile to fd 1 ...
	I0918 11:52:00.998862    1682 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 11:52:00.998865    1682 out.go:309] Setting ErrFile to fd 2...
	I0918 11:52:00.998867    1682 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 11:52:00.998981    1682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	W0918 11:52:00.999045    1682 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17263-1251/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17263-1251/.minikube/config/config.json: no such file or directory
	I0918 11:52:00.999934    1682 out.go:303] Setting JSON to true
	I0918 11:52:01.014961    1682 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1295,"bootTime":1695061826,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 11:52:01.015038    1682 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 11:52:01.019457    1682 out.go:97] [download-only-242000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 11:52:01.023249    1682 out.go:169] MINIKUBE_LOCATION=17263
	I0918 11:52:01.019565    1682 notify.go:220] Checking for updates...
	I0918 11:52:01.031277    1682 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 11:52:01.034274    1682 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 11:52:01.040232    1682 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 11:52:01.048139    1682 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	W0918 11:52:01.055318    1682 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 11:52:01.055652    1682 config.go:182] Loaded profile config "download-only-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0918 11:52:01.055688    1682 start.go:810] api.Load failed for download-only-242000: filestore "download-only-242000": Docker machine "download-only-242000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0918 11:52:01.055734    1682 driver.go:373] Setting default libvirt URI to qemu:///system
	W0918 11:52:01.055750    1682 start.go:810] api.Load failed for download-only-242000: filestore "download-only-242000": Docker machine "download-only-242000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0918 11:52:01.057202    1682 out.go:97] Using the qemu2 driver based on existing profile
	I0918 11:52:01.057209    1682 start.go:298] selected driver: qemu2
	I0918 11:52:01.057212    1682 start.go:902] validating driver "qemu2" against &{Name:download-only-242000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-242000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 11:52:01.059196    1682 cni.go:84] Creating CNI manager for ""
	I0918 11:52:01.059211    1682 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 11:52:01.059218    1682 start_flags.go:321] config:
	{Name:download-only-242000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-242000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 11:52:01.063158    1682 iso.go:125] acquiring lock: {Name:mk34e23c181861c65264ec384f06e6cfa001aa08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 11:52:01.066301    1682 out.go:97] Starting control plane node download-only-242000 in cluster download-only-242000
	I0918 11:52:01.066310    1682 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 11:52:01.127856    1682 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 11:52:01.127867    1682 cache.go:57] Caching tarball of preloaded images
	I0918 11:52:01.128048    1682 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 11:52:01.133170    1682 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I0918 11:52:01.133178    1682 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	I0918 11:52:01.212021    1682 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4?checksum=md5:48f32a2a1ca4194a6d2a21c3ded2b2db -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4
	I0918 11:52:10.273396    1682 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	I0918 11:52:10.273546    1682 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-arm64.tar.lz4 ...
	I0918 11:52:10.853278    1682 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0918 11:52:10.853340    1682 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/download-only-242000/config.json ...
	I0918 11:52:10.853579    1682 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0918 11:52:10.853739    1682 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17263-1251/.minikube/cache/darwin/arm64/v1.28.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-242000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.08s)
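The "Last Start" log above uses klog's `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg` layout. A small bash sketch (a hypothetical helper, not part of the test harness) that splits such a line into severity, date, and message:

```shell
#!/usr/bin/env bash
# Split a klog-formatted line into severity, mmdd date, and message.
# Layout (from the log header above): [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
parse_klog() {
  local line="$1"
  local level="${line:0:1}"   # I=info, W=warning, E=error, F=fatal
  local rest="${line#?}"      # drop the severity character
  local date="${rest%% *}"    # mmdd, e.g. 0918
  local msg="${line#*] }"     # everything after the first "file:line]"
  printf '%s|%s|%s\n' "$level" "$date" "$msg"
}
```

For example, feeding it the `W0918 11:52:01.015038 ... gopshost.Virtualization returned error` line above yields the `W` severity, the `0918` date, and the bare message.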

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-242000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.36s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-077000 --alsologtostderr --binary-mirror http://127.0.0.1:49414 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-077000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-077000
--- PASS: TestBinaryMirror (0.36s)

TestAddons/Setup (403.62s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-221000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-darwin-arm64 start -p addons-221000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: (6m43.618339292s)
--- PASS: TestAddons/Setup (403.62s)

TestAddons/parallel/InspektorGadget (10.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2qmph" [7eea1399-50e6-40e6-8424-bacd2f982bff] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007715125s
addons_test.go:817: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-221000
addons_test.go:817: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-221000: (5.212130208s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 2.186084ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-ph4qt" [193ff040-48cb-4429-8aa3-4065ecb5241d] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009975291s
addons_test.go:391: (dbg) Run:  kubectl --context addons-221000 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p addons-221000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

TestAddons/parallel/CSI (51.24s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 2.50875ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-221000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-221000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [69af6c7a-1237-41cc-9383-4b459d3093c9] Pending
helpers_test.go:344: "task-pv-pod" [69af6c7a-1237-41cc-9383-4b459d3093c9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [69af6c7a-1237-41cc-9383-4b459d3093c9] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00993725s
addons_test.go:560: (dbg) Run:  kubectl --context addons-221000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-221000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-221000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-221000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-221000 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-221000 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-221000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-221000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-221000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b6b39892-d503-4b1d-8d31-2a3047f0e644] Pending
helpers_test.go:344: "task-pv-pod-restore" [b6b39892-d503-4b1d-8d31-2a3047f0e644] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b6b39892-d503-4b1d-8d31-2a3047f0e644] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.008511s
addons_test.go:602: (dbg) Run:  kubectl --context addons-221000 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-221000 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-221000 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-darwin-arm64 -p addons-221000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-darwin-arm64 -p addons-221000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.098697958s)
addons_test.go:618: (dbg) Run:  out/minikube-darwin-arm64 -p addons-221000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.24s)
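The CSI test above polls `kubectl get pvc ... -o jsonpath={.status.phase}` once per interval until the claim reaches the phase it is waiting for. A generic retry loop in the same spirit (a sketch only; `poll_until` and the stub below are hypothetical, not the harness's actual helpers):

```shell
#!/usr/bin/env bash
# Re-run a command until its stdout equals the wanted value, up to a fixed
# number of attempts. Returns 0 on a match, 1 when the budget is exhausted.
poll_until() {
  local want="$1" tries="$2"
  shift 2
  local i=0 out
  while [ "$i" -lt "$tries" ]; do
    out="$("$@")"
    if [ "$out" = "$want" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep 0.2
  done
  return 1
}

# Stub standing in for: kubectl get pvc hpvc -o jsonpath='{.status.phase}'
fake_pvc_phase() { echo "Bound"; }
```

With real kubectl this would be invoked as something like `poll_until Bound 60 kubectl get pvc hpvc -o jsonpath='{.status.phase}'`; note kubectl also ships `kubectl wait`, which avoids hand-rolled polling for the conditions it supports.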

TestAddons/parallel/Headlamp (11.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-221000 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-6fg5z" [9f170bf9-6571-4a5b-b6dc-c576dbc15188] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-6fg5z" [9f170bf9-6571-4a5b-b6dc-c576dbc15188] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.007343958s
--- PASS: TestAddons/parallel/Headlamp (11.58s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-221000 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-221000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-221000
addons_test.go:148: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-221000: (12.083151458s)
addons_test.go:152: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-221000
addons_test.go:156: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-221000
addons_test.go:161: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-221000
--- PASS: TestAddons/StoppedEnableDisable (12.26s)

TestHyperKitDriverInstallOrUpdate (8.18s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.18s)

TestErrorSpam/setup (29.76s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-406000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-406000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 --driver=qemu2 : (29.764544875s)
--- PASS: TestErrorSpam/setup (29.76s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 pause
--- PASS: TestErrorSpam/pause (0.68s)

TestErrorSpam/unpause (0.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 unpause
--- PASS: TestErrorSpam/unpause (0.62s)

TestErrorSpam/stop (3.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 stop: (3.067710459s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-406000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-406000 stop
--- PASS: TestErrorSpam/stop (3.23s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17263-1251/.minikube/files/etc/test/nested/copy/1668/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.9s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-847000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0918 12:14:00.294642    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:14:00.301404    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:14:00.313436    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:14:00.335498    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:14:00.377554    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:14:00.459617    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:14:00.621699    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:14:00.943814    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:14:01.586020    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:14:02.868167    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:14:05.430279    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:14:10.551503    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-847000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (45.896145584s)
--- PASS: TestFunctional/serial/StartWithProxy (45.90s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (32.98s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-847000 --alsologtostderr -v=8
E0918 12:14:20.793691    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
E0918 12:14:41.275708    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-847000 --alsologtostderr -v=8: (32.977351792s)
functional_test.go:659: soft start took 32.977775208s for "functional-847000" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.98s)
TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)
TestFunctional/serial/KubectlGetPods (0.04s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-847000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)
TestFunctional/serial/CacheCmd/cache/add_remote (3.7s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-847000 cache add registry.k8s.io/pause:3.1: (1.3764085s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-847000 cache add registry.k8s.io/pause:3.3: (1.2188395s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-847000 cache add registry.k8s.io/pause:latest: (1.10210875s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.70s)
TestFunctional/serial/CacheCmd/cache/add_local (1.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-847000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2980510426/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 cache add minikube-local-cache-test:functional-847000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 cache delete minikube-local-cache-test:functional-847000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-847000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.26s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)
TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
TestFunctional/serial/CacheCmd/cache/cache_reload (0.98s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-847000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (73.001041ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.98s)
TestFunctional/serial/CacheCmd/cache/delete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.06s)
TestFunctional/serial/MinikubeKubectlCmd (0.4s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 kubectl -- --context functional-847000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.40s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.53s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-847000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.53s)
TestFunctional/serial/ExtraConfig (38.44s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-847000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0918 12:15:22.235693    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-847000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.4358165s)
functional_test.go:757: restart took 38.435939375s for "functional-847000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.44s)
TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-847000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
TestFunctional/serial/LogsCmd (0.63s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.63s)
TestFunctional/serial/LogsFileCmd (0.6s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd4141082382/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.60s)
TestFunctional/serial/InvalidService (4.62s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-847000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-847000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-847000: exit status 115 (111.57075ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31563 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-847000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-847000 delete -f testdata/invalidsvc.yaml: (1.373046208s)
--- PASS: TestFunctional/serial/InvalidService (4.62s)
TestFunctional/parallel/ConfigCmd (0.2s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-847000 config get cpus: exit status 14 (27.836792ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-847000 config get cpus: exit status 14 (27.548083ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.20s)
TestFunctional/parallel/DashboardCmd (10.16s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-847000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-847000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2577: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.16s)
TestFunctional/parallel/DryRun (0.22s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-847000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-847000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.77325ms)
-- stdout --
	* [functional-847000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0918 12:16:18.595186    2564 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:16:18.595298    2564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:16:18.595302    2564 out.go:309] Setting ErrFile to fd 2...
	I0918 12:16:18.595305    2564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:16:18.595423    2564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:16:18.596432    2564 out.go:303] Setting JSON to false
	I0918 12:16:18.611943    2564 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2752,"bootTime":1695061826,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:16:18.612044    2564 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:16:18.616237    2564 out.go:177] * [functional-847000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0918 12:16:18.623164    2564 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:16:18.627193    2564 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:16:18.623225    2564 notify.go:220] Checking for updates...
	I0918 12:16:18.633140    2564 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:16:18.636189    2564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:16:18.639205    2564 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:16:18.642188    2564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:16:18.645456    2564 config.go:182] Loaded profile config "functional-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:16:18.645700    2564 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:16:18.650177    2564 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 12:16:18.657153    2564 start.go:298] selected driver: qemu2
	I0918 12:16:18.657159    2564 start.go:902] validating driver "qemu2" against &{Name:functional-847000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-847000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:16:18.657204    2564 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:16:18.662255    2564 out.go:177] 
	W0918 12:16:18.666171    2564 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0918 12:16:18.670204    2564 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-847000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
TestFunctional/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-847000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-847000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.984625ms)
-- stdout --
	* [functional-847000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0918 12:16:18.478243    2560 out.go:296] Setting OutFile to fd 1 ...
	I0918 12:16:18.478371    2560 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:16:18.478374    2560 out.go:309] Setting ErrFile to fd 2...
	I0918 12:16:18.478377    2560 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 12:16:18.478504    2560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
	I0918 12:16:18.479893    2560 out.go:303] Setting JSON to false
	I0918 12:16:18.497351    2560 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2752,"bootTime":1695061826,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0918 12:16:18.497457    2560 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0918 12:16:18.503195    2560 out.go:177] * [functional-847000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	I0918 12:16:18.511236    2560 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 12:16:18.515154    2560 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	I0918 12:16:18.511284    2560 notify.go:220] Checking for updates...
	I0918 12:16:18.520406    2560 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:16:18.527196    2560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:16:18.528555    2560 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	I0918 12:16:18.531153    2560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:16:18.535474    2560 config.go:182] Loaded profile config "functional-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0918 12:16:18.535719    2560 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 12:16:18.539155    2560 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0918 12:16:18.546221    2560 start.go:298] selected driver: qemu2
	I0918 12:16:18.546226    2560 start.go:902] validating driver "qemu2" against &{Name:functional-847000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17250/minikube-v1.31.0-1694798110-17250-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-847000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 12:16:18.546273    2560 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:16:18.552193    2560 out.go:177] 
	W0918 12:16:18.556228    2560 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0918 12:16:18.560191    2560 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.27s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (25.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b645efc2-70a2-4a16-87f6-9de7bfcd38b5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011565709s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-847000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-847000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-847000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-847000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ac6f3e94-046c-45aa-97a1-f21587fda138] Pending
helpers_test.go:344: "sp-pod" [ac6f3e94-046c-45aa-97a1-f21587fda138] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ac6f3e94-046c-45aa-97a1-f21587fda138] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.008550583s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-847000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-847000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-847000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8bee9ed0-6da0-40b7-ab20-25bd903e52f9] Pending
helpers_test.go:344: "sp-pod" [8bee9ed0-6da0-40b7-ab20-25bd903e52f9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8bee9ed0-6da0-40b7-ab20-25bd903e52f9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008424458s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-847000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.03s)

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh -n functional-847000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 cp functional-847000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2738861763/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh -n functional-847000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.29s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1668/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "sudo cat /etc/test/nested/copy/1668/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1668.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "sudo cat /etc/ssl/certs/1668.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1668.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "sudo cat /usr/share/ca-certificates/1668.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/16682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "sudo cat /etc/ssl/certs/16682.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/16682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "sudo cat /usr/share/ca-certificates/16682.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.45s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-847000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-847000 ssh "sudo systemctl is-active crio": exit status 1 (69.077833ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-847000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-847000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-847000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-847000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2421: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-847000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-847000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [38baa663-ba19-4064-9f8f-b4bbeaacb51e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [38baa663-ba19-4064-9f8f-b4bbeaacb51e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.006793167s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.12s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-847000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.79.9 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-847000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-847000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-847000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-8g5cr" [751e17c6-7b8b-4bf6-bb0d-97f745627ff3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-8g5cr" [751e17c6-7b8b-4bf6-bb0d-97f745627ff3] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.0092505s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.11s)

TestFunctional/parallel/ServiceCmd/List (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 service list -o json
functional_test.go:1493: Took "284.856541ms" to run "out/minikube-darwin-arm64 -p functional-847000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:30199
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:30199
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "118.131083ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "31.653291ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "118.250167ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "32.487333ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

TestFunctional/parallel/MountCmd/any-port (4.02s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-847000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2406786753/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1695064572470330000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2406786753/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1695064572470330000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2406786753/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1695064572470330000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2406786753/001/test-1695064572470330000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-847000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (68.186958ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 18 19:16 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 18 19:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 18 19:16 test-1695064572470330000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh cat /mount-9p/test-1695064572470330000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-847000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ebf89895-9ac9-4a77-ad15-d68c1c62a5c2] Pending
helpers_test.go:344: "busybox-mount" [ebf89895-9ac9-4a77-ad15-d68c1c62a5c2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ebf89895-9ac9-4a77-ad15-d68c1c62a5c2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ebf89895-9ac9-4a77-ad15-d68c1c62a5c2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.007293541s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-847000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-847000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2406786753/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.02s)

TestFunctional/parallel/MountCmd/specific-port (0.89s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-847000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1700026312/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-847000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (68.14325ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-847000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1700026312/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-847000 ssh "sudo umount -f /mount-9p": exit status 1 (68.1265ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-847000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-847000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1700026312/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.89s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.8s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-847000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup817082744/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-847000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup817082744/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-847000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup817082744/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-847000 ssh "findmnt -T" /mount1: exit status 1 (72.523541ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-847000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-847000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup817082744/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-847000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup817082744/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-847000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup817082744/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.80s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-847000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-847000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-847000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-847000 image ls --format short --alsologtostderr:
I0918 12:16:40.167873    2751 out.go:296] Setting OutFile to fd 1 ...
I0918 12:16:40.168051    2751 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 12:16:40.168054    2751 out.go:309] Setting ErrFile to fd 2...
I0918 12:16:40.168057    2751 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 12:16:40.168201    2751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
I0918 12:16:40.168669    2751 config.go:182] Loaded profile config "functional-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0918 12:16:40.168734    2751 config.go:182] Loaded profile config "functional-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0918 12:16:40.169904    2751 ssh_runner.go:195] Run: systemctl --version
I0918 12:16:40.169912    2751 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/functional-847000/id_rsa Username:docker}
I0918 12:16:40.205507    2751 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-847000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-847000 | 6cc66fec3497d | 30B    |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 89d57b83c1786 | 116MB  |
| registry.k8s.io/kube-proxy                  | v1.28.2           | 7da62c127fc0f | 68.3MB |
| docker.io/library/nginx                     | latest            | 91582cfffc2d0 | 192MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| docker.io/library/nginx                     | alpine            | fa0c6bb795403 | 43.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/google-containers/addon-resizer      | functional-847000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.28.2           | 30bb499447fe1 | 120MB  |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 64fc40cee3716 | 57.8MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-847000 image ls --format table --alsologtostderr:
I0918 12:16:40.330357    2759 out.go:296] Setting OutFile to fd 1 ...
I0918 12:16:40.330497    2759 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 12:16:40.330500    2759 out.go:309] Setting ErrFile to fd 2...
I0918 12:16:40.330503    2759 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 12:16:40.330627    2759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
I0918 12:16:40.331079    2759 config.go:182] Loaded profile config "functional-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0918 12:16:40.331141    2759 config.go:182] Loaded profile config "functional-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0918 12:16:40.331919    2759 ssh_runner.go:195] Run: systemctl --version
I0918 12:16:40.331928    2759 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/functional-847000/id_rsa Username:docker}
I0918 12:16:40.366707    2759 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-847000 image ls --format json --alsologtostderr:
[{"id":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"120000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"6cc66fec3497d6777558962a12c41ac0ba6d9bfef23c48c17885f38bad2dfaa4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-847000"],"size":"30"},{"id":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"57800000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-847000"],"size":"32900000"},{"id":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"68300000"},{"id":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"116000000"},{"id":"91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-847000 image ls --format json --alsologtostderr:
I0918 12:16:40.250126    2755 out.go:296] Setting OutFile to fd 1 ...
I0918 12:16:40.250744    2755 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 12:16:40.250750    2755 out.go:309] Setting ErrFile to fd 2...
I0918 12:16:40.250754    2755 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 12:16:40.250908    2755 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
I0918 12:16:40.251437    2755 config.go:182] Loaded profile config "functional-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0918 12:16:40.251497    2755 config.go:182] Loaded profile config "functional-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0918 12:16:40.252481    2755 ssh_runner.go:195] Run: systemctl --version
I0918 12:16:40.252491    2755 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/functional-847000/id_rsa Username:docker}
I0918 12:16:40.287348    2755 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-847000 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "68300000"
- id: 91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "57800000"
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-847000
size: "32900000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "116000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 6cc66fec3497d6777558962a12c41ac0ba6d9bfef23c48c17885f38bad2dfaa4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-847000
size: "30"
- id: 30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "120000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-847000 image ls --format yaml --alsologtostderr:
I0918 12:16:40.167871    2750 out.go:296] Setting OutFile to fd 1 ...
I0918 12:16:40.168062    2750 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 12:16:40.168065    2750 out.go:309] Setting ErrFile to fd 2...
I0918 12:16:40.168068    2750 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 12:16:40.168227    2750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
I0918 12:16:40.168644    2750 config.go:182] Loaded profile config "functional-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0918 12:16:40.168709    2750 config.go:182] Loaded profile config "functional-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0918 12:16:40.169532    2750 ssh_runner.go:195] Run: systemctl --version
I0918 12:16:40.169542    2750 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/functional-847000/id_rsa Username:docker}
I0918 12:16:40.204593    2750 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-847000 ssh pgrep buildkitd: exit status 1 (68.232291ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image build -t localhost/my-image:functional-847000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-847000 image build -t localhost/my-image:functional-847000 testdata/build --alsologtostderr: (1.672936542s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-847000 image build -t localhost/my-image:functional-847000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 6e4e617fc903
Removing intermediate container 6e4e617fc903
---> 0589ad14efe1
Step 3/3 : ADD content.txt /
---> 16d63a085bfb
Successfully built 16d63a085bfb
Successfully tagged localhost/my-image:functional-847000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-847000 image build -t localhost/my-image:functional-847000 testdata/build --alsologtostderr:
I0918 12:16:40.317141    2758 out.go:296] Setting OutFile to fd 1 ...
I0918 12:16:40.317368    2758 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 12:16:40.317372    2758 out.go:309] Setting ErrFile to fd 2...
I0918 12:16:40.317374    2758 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 12:16:40.317503    2758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17263-1251/.minikube/bin
I0918 12:16:40.317974    2758 config.go:182] Loaded profile config "functional-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0918 12:16:40.318549    2758 config.go:182] Loaded profile config "functional-847000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0918 12:16:40.319461    2758 ssh_runner.go:195] Run: systemctl --version
I0918 12:16:40.319474    2758 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17263-1251/.minikube/machines/functional-847000/id_rsa Username:docker}
I0918 12:16:40.353633    2758 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3687443164.tar
I0918 12:16:40.353691    2758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0918 12:16:40.356672    2758 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3687443164.tar
I0918 12:16:40.358243    2758 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3687443164.tar: stat -c "%s %y" /var/lib/minikube/build/build.3687443164.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3687443164.tar': No such file or directory
I0918 12:16:40.358261    2758 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3687443164.tar --> /var/lib/minikube/build/build.3687443164.tar (3072 bytes)
I0918 12:16:40.365752    2758 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3687443164
I0918 12:16:40.368842    2758 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3687443164 -xf /var/lib/minikube/build/build.3687443164.tar
I0918 12:16:40.376224    2758 docker.go:339] Building image: /var/lib/minikube/build/build.3687443164
I0918 12:16:40.376290    2758 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-847000 /var/lib/minikube/build/build.3687443164
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0918 12:16:41.949101    2758 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-847000 /var/lib/minikube/build/build.3687443164: (1.572816542s)
I0918 12:16:41.949165    2758 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3687443164
I0918 12:16:41.952221    2758 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3687443164.tar
I0918 12:16:41.954874    2758 build_images.go:207] Built localhost/my-image:functional-847000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3687443164.tar
I0918 12:16:41.954888    2758 build_images.go:123] succeeded building to: functional-847000
I0918 12:16:41.954890    2758 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.82s)

TestFunctional/parallel/ImageCommands/Setup (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.653637167s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-847000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image load --daemon gcr.io/google-containers/addon-resizer:functional-847000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-847000 image load --daemon gcr.io/google-containers/addon-resizer:functional-847000 --alsologtostderr: (2.128547084s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.21s)

TestFunctional/parallel/DockerEnv/bash (0.38s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-847000 docker-env) && out/minikube-darwin-arm64 status -p functional-847000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-847000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.38s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image load --daemon gcr.io/google-containers/addon-resizer:functional-847000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-847000 image load --daemon gcr.io/google-containers/addon-resizer:functional-847000 --alsologtostderr: (1.434371333s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.52801175s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-847000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image load --daemon gcr.io/google-containers/addon-resizer:functional-847000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-847000 image load --daemon gcr.io/google-containers/addon-resizer:functional-847000 --alsologtostderr: (1.825810875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image save gcr.io/google-containers/addon-resizer:functional-847000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image rm gcr.io/google-containers/addon-resizer:functional-847000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-847000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-847000 image save --daemon gcr.io/google-containers/addon-resizer:functional-847000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-847000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.12s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-847000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

                                                
                                    
TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-847000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-847000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                    
TestImageBuild/serial/Setup (28.06s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-438000 --driver=qemu2 
E0918 12:16:44.156836    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/addons-221000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-438000 --driver=qemu2 : (28.06290825s)
--- PASS: TestImageBuild/serial/Setup (28.06s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.08s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-438000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-438000: (1.077393958s)
--- PASS: TestImageBuild/serial/NormalBuild (1.08s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.13s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-438000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.13s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-438000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (72.26s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-356000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-356000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m12.261831667s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (72.26s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.36s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-356000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-356000 addons enable ingress --alsologtostderr -v=5: (13.358380208s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.36s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.24s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-356000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.24s)

                                                
                                    
TestJSONOutput/start/Command (43.26s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-895000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-895000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (43.259535375s)
--- PASS: TestJSONOutput/start/Command (43.26s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.28s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-895000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.28s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.22s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-895000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.22s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-895000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-895000 --output=json --user=testUser: (12.078791125s)
--- PASS: TestJSONOutput/stop/Command (12.08s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-445000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-445000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.874209ms)

-- stdout --
	{"specversion":"1.0","id":"d37da8b5-3149-4844-95d8-48848f498e55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-445000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e8af37a-b3a9-4889-9579-80fab239e259","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17263"}}
	{"specversion":"1.0","id":"e904b2e3-8b86-4c35-a0e6-8fcfec198ab9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig"}}
	{"specversion":"1.0","id":"6a79a50a-6346-4664-982a-56922eb10c1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"976a3651-1e14-4ddb-a5ce-29b3416b06ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b1a23e0a-458a-443b-9a61-316f51480fdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube"}}
	{"specversion":"1.0","id":"778496ed-6d42-47d9-b47b-6379eb1664f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"68d98854-9765-4bbb-abce-2dd0b4ff414b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-445000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-445000
--- PASS: TestErrorJSONOutput (0.32s)

                                                
                                    
TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (18.18s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-735000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E0918 12:22:00.904848    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-arm64 start -p mount-start-1-735000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : (17.173616333s)
--- PASS: TestMountStart/serial/StartWithMountFirst (18.18s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.19s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-1-735000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-1-735000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.19s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (18.35s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-2-738000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-arm64 start -p mount-start-2-738000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=qemu2 : (17.343612916s)
--- PASS: TestMountStart/serial/StartWithMountSecond (18.35s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.2s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-2-738000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-2-738000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.20s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.1s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 delete -p mount-start-1-735000 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.10s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (100.4s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-145000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0918 12:24:19.789005    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:25:00.750720    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
E0918 12:25:38.957623    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-arm64 start -p multinode-145000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : (1m40.271619875s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (100.40s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.68s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-darwin-arm64 kubectl -p multinode-145000 -- rollout status deployment/busybox: (2.491945917s)
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- exec busybox-5bc68d56bd-rc952 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- exec busybox-5bc68d56bd-tv2bj -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- exec busybox-5bc68d56bd-rc952 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- exec busybox-5bc68d56bd-tv2bj -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- exec busybox-5bc68d56bd-rc952 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- exec busybox-5bc68d56bd-tv2bj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.68s)

TestMultiNode/serial/PingHostFrom2Pods (0.54s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- exec busybox-5bc68d56bd-rc952 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- exec busybox-5bc68d56bd-rc952 -- sh -c "ping -c 1 192.168.105.1"
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- exec busybox-5bc68d56bd-tv2bj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-145000 -- exec busybox-5bc68d56bd-tv2bj -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.54s)

TestMultiNode/serial/AddNode (35.68s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-145000 -v 3 --alsologtostderr
E0918 12:26:06.667438    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/functional-847000/client.crt: no such file or directory
E0918 12:26:22.671437    1668 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17263-1251/.minikube/profiles/ingress-addon-legacy-356000/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-darwin-arm64 node add -p multinode-145000 -v 3 --alsologtostderr: (35.505644708s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (35.68s)

TestMultiNode/serial/ProfileList (0.18s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.18s)

TestMultiNode/serial/CopyFile (2.58s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 cp testdata/cp-test.txt multinode-145000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 cp multinode-145000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiNodeserialCopyFile239082523/001/cp-test_multinode-145000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 cp multinode-145000:/home/docker/cp-test.txt multinode-145000-m02:/home/docker/cp-test_multinode-145000_multinode-145000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000-m02 "sudo cat /home/docker/cp-test_multinode-145000_multinode-145000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 cp multinode-145000:/home/docker/cp-test.txt multinode-145000-m03:/home/docker/cp-test_multinode-145000_multinode-145000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000-m03 "sudo cat /home/docker/cp-test_multinode-145000_multinode-145000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 cp testdata/cp-test.txt multinode-145000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 cp multinode-145000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiNodeserialCopyFile239082523/001/cp-test_multinode-145000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 cp multinode-145000-m02:/home/docker/cp-test.txt multinode-145000:/home/docker/cp-test_multinode-145000-m02_multinode-145000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000 "sudo cat /home/docker/cp-test_multinode-145000-m02_multinode-145000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 cp multinode-145000-m02:/home/docker/cp-test.txt multinode-145000-m03:/home/docker/cp-test_multinode-145000-m02_multinode-145000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000-m03 "sudo cat /home/docker/cp-test_multinode-145000-m02_multinode-145000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 cp testdata/cp-test.txt multinode-145000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 cp multinode-145000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiNodeserialCopyFile239082523/001/cp-test_multinode-145000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 cp multinode-145000-m03:/home/docker/cp-test.txt multinode-145000:/home/docker/cp-test_multinode-145000-m03_multinode-145000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000 "sudo cat /home/docker/cp-test_multinode-145000-m03_multinode-145000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 cp multinode-145000-m03:/home/docker/cp-test.txt multinode-145000-m02:/home/docker/cp-test_multinode-145000-m03_multinode-145000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-145000 ssh -n multinode-145000-m02 "sudo cat /home/docker/cp-test_multinode-145000-m03_multinode-145000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.58s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-592000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-592000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (91.605833ms)

-- stdout --
	* [NoKubernetes-592000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17263-1251/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17263-1251/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-592000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-592000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (40.401083ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-592000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-592000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-592000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-592000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.204916ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-592000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-933000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-933000 -n old-k8s-version-933000: exit status 7 (28.947583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-933000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-249000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-249000 -n no-preload-249000: exit status 7 (28.132875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-249000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-330000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-330000 -n embed-certs-330000: exit status 7 (27.307208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-330000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-884000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 7 (27.63075ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-884000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-363000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-363000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-363000 -n newest-cni-363000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-363000 -n newest-cni-363000: exit status 7 (28.489208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-363000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (22/260)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
+
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
+
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.56s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-716000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-716000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-716000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-716000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-716000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-716000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-716000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-716000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-716000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-716000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-716000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: /etc/hosts:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: /etc/resolv.conf:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-716000

>>> host: crictl pods:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: crictl containers:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> k8s: describe netcat deployment:
error: context "cilium-716000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-716000" does not exist

>>> k8s: netcat logs:
error: context "cilium-716000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-716000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-716000" does not exist

>>> k8s: coredns logs:
error: context "cilium-716000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-716000" does not exist

>>> k8s: api server logs:
error: context "cilium-716000" does not exist

>>> host: /etc/cni:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: ip a s:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: ip r s:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: iptables-save:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: iptables table nat:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-716000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-716000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-716000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-716000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-716000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-716000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-716000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-716000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-716000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-716000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-716000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: kubelet daemon config:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> k8s: kubelet logs:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-716000

>>> host: docker daemon status:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: docker daemon config:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: docker system info:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: cri-docker daemon status:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: cri-docker daemon config:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: cri-dockerd version:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: containerd daemon status:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: containerd daemon config:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: containerd config dump:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: crio daemon status:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: crio daemon config:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: /etc/crio:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

>>> host: crio config:
* Profile "cilium-716000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-716000"

----------------------- debugLogs end: cilium-716000 [took: 2.303998125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-716000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-716000
--- SKIP: TestNetworkPlugins/group/cilium (2.56s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-684000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-684000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
